passt: Relicense to GPL 2.0, or any later version
In practical terms, passt doesn't benefit from the additional
protection offered by the AGPL over the GPL, because it's not
suitable to be executed over a computer network.
Further, restricting distribution to version 3 of the GPL wouldn't
provide any practical advantage either, as far as the passt codebase
is concerned, and might cause unnecessary compatibility dilemmas.
Change licensing terms to the GNU General Public License Version 2,
or any later version, with written permission from all current and
past contributors, namely: myself, David Gibson, Laine Stump, Andrea
Bolognani, Paul Holzinger, Richard W.M. Jones, Chris Kuhn, Florian
Weimer, Giuseppe Scrivano, Stefan Hajnoczi, and Vasiliy Ulyanov.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-04-05 18:11:44 +00:00
// SPDX-License-Identifier: GPL-2.0-or-later
passt: New design and implementation with native Layer 4 sockets
This is a reimplementation, partially building on the earlier draft,
that uses L4 sockets (SOCK_DGRAM, SOCK_STREAM) instead of SOCK_RAW,
providing L4-L2 translation functionality without requiring any
security capability.
Conceptually, this follows the design presented at:
https://gitlab.com/abologna/kubevirt-and-kvm/-/blob/master/Networking.md
The most significant novelty here comes from TCP and UDP translation
layers. In particular, the TCP state and translation logic follows
the intent of being minimalistic, without reimplementing a full TCP
stack in either direction, and synchronising as much as possible the
TCP dynamics and flows between guest and host kernel.
Another important introduction concerns addressing, port translation
and forwarding. The Layer 4 implementations now attempt to bind on
all unbound ports, in order to forward connections in a transparent
way.
While at it:
- the qemu 'tap' back-end can't be used as-is by qrap anymore,
because of explicit checks now introduced in qemu to ensure that
the corresponding file descriptor is actually a tap device. For
this reason, qrap now operates on a 'socket' back-end type,
accounting for and building the additional header reporting
frame length
- provide a demo script that sets up namespaces, addresses and
routes, and starts the daemon. A virtual machine started in the
network namespace, wrapped by qrap, will now directly interface
with passt and communicate using Layer 4 sockets provided by the
host kernel.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-02-16 06:25:09 +00:00
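The "bind on all unbound ports" approach described in this commit message can
be pictured with a minimal sketch (illustrative only, hypothetical names, not
the actual passt code; here a single dual-stack IPv6 wildcard socket is used
per port):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to bind and listen on every TCP port not already taken; ports that
 * fail to bind (EADDRINUSE, or EACCES for low ports without capabilities)
 * are simply skipped. Returns the number of listening sockets opened.
 */
static int bind_all_ports(int *socks, int max_socks)
{
        int n = 0;
        unsigned int port;

        for (port = 1; port <= 65535 && n < max_socks; port++) {
                struct sockaddr_in6 a = {
                        .sin6_family = AF_INET6,
                        .sin6_port = htons(port),
                };
                int s = socket(AF_INET6, SOCK_STREAM, 0);

                if (s < 0)
                        continue;

                if (bind(s, (struct sockaddr *)&a, sizeof(a)) ||
                    listen(s, 128)) {
                        close(s);
                        continue;
                }

                socks[n++] = s;
        }

        return n;
}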
/* PASST - Plug A Simple Socket Transport
passt: Add PASTA mode, major rework
PASTA (Pack A Subtle Tap Abstraction) provides quasi-native host
connectivity to an otherwise disconnected, unprivileged network
and user namespace, similarly to slirp4netns. Given that the
implementation is largely overlapping with PASST, no separate binary
is built: 'pasta' (and 'passt4netns' for clarity) both link to
'passt', and the mode of operation is selected depending on how the
binary is invoked. Usage example:
$ unshare -rUn
# echo $$
1871759
$ ./pasta 1871759 # From another terminal
# udhcpc -i pasta0 2>/dev/null
# ping -c1 pasta.pizza
PING pasta.pizza (64.190.62.111) 56(84) bytes of data.
64 bytes from 64.190.62.111 (64.190.62.111): icmp_seq=1 ttl=255 time=34.6 ms
--- pasta.pizza ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 34.575/34.575/34.575/0.000 ms
# ping -c1 spaghetti.pizza
PING spaghetti.pizza(2606:4700:3034::6815:147a (2606:4700:3034::6815:147a)) 56 data bytes
64 bytes from 2606:4700:3034::6815:147a (2606:4700:3034::6815:147a): icmp_seq=1 ttl=255 time=29.0 ms
--- spaghetti.pizza ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 28.967/28.967/28.967/0.000 ms
This entails a major rework, especially with regard to the storage of
tracked connections and to the semantics of epoll(7) references.
Indexing TCP and UDP bindings merely by socket proved to be
inflexible and unsuitable to handle different connection flows: pasta
also provides Layer-2 to Layer-2 socket mapping between init and a
separate namespace for local connections, using a pair of splice()
system calls for TCP, and a recvmmsg()/sendmmsg() pair for UDP local
bindings. For instance, building on the previous example:
# ip link set dev lo up
# iperf3 -s
$ iperf3 -c ::1 -Z -w 32M -l 1024k -P2 | tail -n4
[SUM] 0.00-10.00 sec 52.3 GBytes 44.9 Gbits/sec 283 sender
[SUM] 0.00-10.43 sec 52.3 GBytes 43.1 Gbits/sec receiver
iperf Done.
epoll(7) references now include a generic part in order to
demultiplex data to the relevant protocol handler, using 24
bits for the socket number, and an opaque portion reserved for
use by the individual protocol handlers, in order to track sockets
back to the corresponding connections and bindings.
A number of fixes pertaining to TCP state machine and congestion
window handling are also included here.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-07-17 06:34:53 +00:00
 *  for qemu/UNIX domain socket mode
 *
 * PASTA - Pack A Subtle Tap Abstraction
 *  for network namespace/tap device mode
 *
 * tcp.c - TCP L2-L4 translation state machine
 *
 * Copyright (c) 2020-2022 Red Hat GmbH
 * Author: Stefano Brivio <sbrivio@redhat.com>
*/
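The "Add PASTA mode" commit above describes epoll(7) references carrying a
generic part (24 bits for the socket number) plus an opaque, per-protocol
portion. A minimal sketch of how such a reference can be packed into
epoll_event.data.u64 follows; the field names and exact widths are
illustrative assumptions, not necessarily the layout used by passt:

#include <stdint.h>
#include <sys/epoll.h>

union epoll_ref_sketch {
        struct {
                uint32_t proto:8;       /* protocol handler to dispatch to */
                uint32_t sock:24;       /* socket number, 24 bits as described */
                uint32_t opaque;        /* reserved for the protocol handler,
                                         * e.g. connection index or bound port
                                         */
        };
        uint64_t u64;                   /* fits in epoll_event.data.u64 */
};

/* Registration side: pack the reference into the epoll data field */
static int sock_add(int epollfd, int s, uint8_t proto, uint32_t opaque)
{
        union epoll_ref_sketch ref = { .proto = proto, .sock = s,
                                       .opaque = opaque };
        struct epoll_event ev = { .events = EPOLLIN, .data.u64 = ref.u64 };

        return epoll_ctl(epollfd, EPOLL_CTL_ADD, s, &ev);
}

On an event, the handler reads back ev.data.u64 into the union, switches on
the generic part, and hands the opaque part to the protocol code.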
/**
 * DOC: Theory of Operation
 *
 *
 * PASST mode
 * ==========
 *
 * This implementation maps TCP traffic between a single L2 interface (tap) and
 * native TCP (L4) sockets, mimicking and reproducing as closely as possible the
 * inferred behaviour of applications running on a guest, connected via said L2
 * interface. Four connection flows are supported:
 * - from the local host to the guest behind the tap interface:
 * - this is the main use case for proxies in service meshes
 * - we bind to configured local ports, and relay traffic between L4 sockets
 * with local endpoints and the L2 interface
 * - from remote hosts to the guest behind the tap interface:
 * - this might be needed for services that need to be addressed directly,
 * and typically configured with special port forwarding rules (which are
 * not needed here)
 * - we also relay traffic between L4 sockets with remote endpoints and the L2
 * interface
 * - from the guest to the local host:
 * - this is not observed in practice, but implemented for completeness and
 * transparency
 * - from the guest to external hosts:
 * - this might be needed for applications running on the guest that need to
 * directly access internet services (e.g. NTP)
 *
 * Relevant goals are:
 * - transparency: sockets need to behave as if guest applications were running
 * directly on the host. This is achieved by:
 * - avoiding port and address translations whenever possible
 * - mirroring TCP dynamics by observation of socket parameters (TCP_INFO
 * socket option) and TCP headers of packets coming from the tap interface,
tcp: Don't use TCP_WINDOW_CLAMP
On the L2 tap side, we see TCP headers and know the TCP window that the
ultimate receiver is advertising. In order to avoid unnecessary buffering
within passt/pasta (or by the kernel on passt/pasta's behalf) we attempt
to advertise that window back to the original sock-side sender using
TCP_WINDOW_CLAMP.
However, TCP_WINDOW_CLAMP just doesn't work like this. Prior to kernel
commit 3aa7857fe1d7 ("tcp: enable mid stream window clamp"), it simply
had no effect on established sockets. After that commit, it does affect
established sockets but doesn't behave the way we need:
* It appears to be designed only to shrink the window, not to allow it to
re-expand.
* More importantly, that commit has a serious bug where if the
setsockopt() is made when the existing kernel advertised window for the
socket happens to be zero, it will now become locked at zero, stopping
any further data from being received on the socket.
Since this has never worked as intended, simply remove it. It might be
possible to re-implement the intended behaviour by manipulating SO_RCVBUF,
so we leave a comment to that effect.
This kernel bug is the underlying cause of both the linked passt bug and
the linked podman bug. We attempted to fix this before with passt commit
d3192f67 ("tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag").
However while that commit masked the bug for some cases, it didn't really
address the problem.
Fixes: d3192f67c492 ("tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag")
Link: https://github.com/containers/podman/issues/20170
Link: https://bugs.passt.top/show_bug.cgi?id=74
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-11-09 09:54:00 +00:00
 * reapplying those parameters in both flow directions (including TCP_MSS
 * socket option); a short illustrative sketch follows this list
 * - simplicity: only a small subset of TCP logic is implemented here and
 * delegated as much as possible to the TCP implementations of guest and host
 * kernel. This is achieved by:
 * - avoiding a complete TCP stack reimplementation, with a modified TCP state
 * machine focused on the translation of observed events instead
 * - mirroring TCP dynamics as described above and hence avoiding the need for
 * segmentation, explicit queueing, and reassembly of segments
 * - security:
 * - no dynamic memory allocation is performed
* - TODO: synflood protection
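As referenced in the transparency goal above, a sketch of the parameter
mirroring (illustrative only, hypothetical function name, simplified compared
to the real translation logic; the Linux socket option carrying the MSS is
TCP_MAXSEG):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Observe kernel-side TCP parameters on a connection socket via TCP_INFO,
 * and reapply the MSS seen from the other side via TCP_MAXSEG.
 */
static int mirror_tcp_params(int s, struct tcp_info *ti, int peer_mss)
{
        socklen_t sl = sizeof(*ti);

        if (getsockopt(s, IPPROTO_TCP, TCP_INFO, ti, &sl))
                return -1;

        /* e.g. ti->tcpi_snd_wscale and ti->tcpi_snd_mss can then shape
         * what is advertised on the tap side
         */

        return setsockopt(s, IPPROTO_TCP, TCP_MAXSEG, &peer_mss,
                          sizeof(peer_mss));
}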
 *
 * Portability is limited by usage of Linux-specific socket options.
 *
 *
 * Limits
 * ------
 *
 * To avoid the need for dynamic memory allocation, a maximum, reasonable number
 * of connections is defined by TCP_MAX_CONNS (currently 128k).
tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connections to timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for originating socket in spliced connections,
there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-19 00:29:05 +00:00
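The "macros to compare sequence numbers" mentioned above generally come down
to serial number arithmetic on 32-bit values; a minimal sketch (the names are
illustrative and may not match the ones actually defined in this file):

#include <stdint.h>

/* Modulo-2^32 comparisons: the signed difference gives the right answer
 * across wrap-around as long as the two sequence numbers are less than
 * 2^31 apart.
 */
#define SEQ_LT(a, b)    ((int32_t)((a) - (b)) < 0)
#define SEQ_LE(a, b)    ((int32_t)((a) - (b)) <= 0)
#define SEQ_GT(a, b)    ((int32_t)((a) - (b)) > 0)
#define SEQ_GE(a, b)    ((int32_t)((a) - (b)) >= 0)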
 *
 * Data needs to linger on sockets as long as it's not acknowledged by the
 * guest, and is read using MSG_PEEK into preallocated static buffers sized
 * to the maximum supported window, 16 MiB ("discard" buffer, for already-sent
 * data) plus a number of maximum-MSS-sized buffers. This imposes a practical
 * limitation on window scaling, that is, the maximum factor is 256. Larger
 * factors will be accepted, but resulting, larger values are never advertised
 * to the other side, and not used while queueing data.
 *
 *
 * Ports
 * -----
 *
 * To avoid the need for ad-hoc configuration of port forwarding or allowed
 * ports, listening sockets can be opened and bound to all unbound ports on the
 * host, as far as process capabilities allow. This service needs to be started
 * after any application proxy that needs to bind to local ports. Mapped ports
 * can also be configured explicitly.
 *
 * No port translation is needed for connections initiated remotely or by the
 * local host: source port from socket is reused while establishing connections
* to the guest.
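For illustration of the source port reuse just described (a sketch under
assumed names, not the actual code): the original source port is taken from
the peer address of the accepted socket and placed unchanged into the TCP
header built for the guest.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Accept a new inbound connection and recover the peer's source port,
 * to be reused verbatim in the SYN forwarded to the guest.
 */
static int accept_keep_sport(int listen_fd, in_port_t *sport)
{
        struct sockaddr_in6 sa;
        socklen_t sl = sizeof(sa);
        int s = accept(listen_fd, (struct sockaddr *)&sa, &sl);

        if (s >= 0)
                *sport = ntohs(sa.sin6_port);   /* reused towards the tap */

        return s;
}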
 *
 * For connections initiated by the guest, it's not possible to force the same
 * source port as connections are established by the host kernel: that's the
 * only port translation needed.
 *
 *
 * Connection tracking and storage
 * -------------------------------
 *
 * Connections are tracked by struct tcp_tap_conn entries in the @tc
 * array, containing addresses, ports, TCP states and parameters. This
 * is statically allocated and indexed by an arbitrary connection
 * number. The array is compacted whenever a connection is closed, by
* remapping the highest connection index in use to the one freed up.
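A sketch of the compaction step described above, with hypothetical names and a
placeholder element type standing in for struct tcp_tap_conn; the real code
also has to rewrite any reference (epoll data, lookup structures) that still
carries the index of the moved entry:

/* Placeholder for struct tcp_tap_conn from this file */
struct conn_sketch {
        int events;
        /* addresses, ports, TCP parameters, ... */
};

/* Close the entry at index 'hole' in a statically allocated, densely
 * packed table of 'n' connections: move the highest index in use into
 * the freed slot so that indices 0..n-2 stay contiguous.
 */
static void conn_compact(struct conn_sketch *table, unsigned *n, unsigned hole)
{
        (*n)--;
        if (hole != *n)
                table[hole] = table[*n];
        /* references carrying index *n now need to point to 'hole' */
}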
 *
 * References used for the epoll interface report the connection index used for
 * the @tc array.
 *
 * IPv4 addresses are stored as IPv4-mapped IPv6 addresses to avoid the need for
* separate data structures depending on the protocol version.
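For illustration, storing an IPv4 address in the same IPv6-sized field as an
IPv4-mapped address (::ffff:a.b.c.d) can be done along these lines (sketch
only, hypothetical helper name):

#include <netinet/in.h>
#include <string.h>

/* Store a.b.c.d as ::ffff:a.b.c.d so one struct in6_addr field covers
 * both address families; IN6_IS_ADDR_V4MAPPED() tells them apart later.
 */
static void to_v4mapped(struct in6_addr *a6, const struct in_addr *a4)
{
        memset(a6, 0, sizeof(*a6));
        a6->s6_addr[10] = 0xff;
        a6->s6_addr[11] = 0xff;
        memcpy(&a6->s6_addr[12], &a4->s_addr, sizeof(a4->s_addr));
}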
 *
 * - Inbound connection requests (to the guest) are mapped using the triple
 * < source IP address, source port, destination port >
 * - Outbound connection requests (from the guest) are mapped using the triple
 * < destination IP address, destination port, source port >
 * where the source port is the one used by the guest, not the one used by the
 * corresponding host socket
 *
 *
 * Initialisation
 * --------------
 *
 * Up to 2^15 + 2^14 listening sockets (excluding ephemeral ports, repeated for
 * IPv4 and IPv6) can be opened and bound to wildcard addresses. Some will fail
 * to bind (for low ports, or ports already bound, e.g. by a proxy). These are
 * added to the epoll list, with no separate storage.
 *
 *
* Events and states
 * -----------------
 *
 * Instead of tracking connection states using a state machine, connection
 * events are used to determine state and actions for a given connection. This
 * makes the implementation simpler as most of the relevant tasks deal with
 * reactions to events, rather than state-associated actions. For user
 * convenience, approximate states are mapped in logs from events by
 * @tcp_state_str.
 *
 * The events are:
 *
 * - SOCK_ACCEPTED connection accepted from socket, SYN sent to tap/guest
 *
 * - TAP_SYN_RCVD tap/guest initiated connection, SYN received
 *
 * - TAP_SYN_ACK_SENT SYN, ACK sent to tap/guest, valid for TAP_SYN_RCVD only
 *
 * - ESTABLISHED connection established, the following events are valid:
 *
 * - SOCK_FIN_RCVD FIN (EPOLLRDHUP) received from socket
 *
* - SOCK_FIN_SENT FIN (write shutdown) sent to socket
*
|
2022-03-15 00:07:02 +00:00
|
|
|
* - TAP_FIN_RCVD FIN received from tap/guest
|
passt: New design and implementation with native Layer 4 sockets
This is a reimplementation, partially building on the earlier draft,
that uses L4 sockets (SOCK_DGRAM, SOCK_STREAM) instead of SOCK_RAW,
providing L4-L2 translation functionality without requiring any
security capability.
Conceptually, this follows the design presented at:
https://gitlab.com/abologna/kubevirt-and-kvm/-/blob/master/Networking.md
The most significant novelty here comes from TCP and UDP translation
layers. In particular, the TCP state and translation logic follows
the intent of being minimalistic, without reimplementing a full TCP
stack in either direction, and synchronising as much as possible the
TCP dynamic and flows between guest and host kernel.
Another important introduction concerns addressing, port translation
and forwarding. The Layer 4 implementations now attempt to bind on
all unbound ports, in order to forward connections in a transparent
way.
While at it:
- the qemu 'tap' back-end can't be used as-is by qrap anymore,
because of explicit checks now introduced in qemu to ensure that
the corresponding file descriptor is actually a tap device. For
this reason, qrap now operates on a 'socket' back-end type,
accounting for and building the additional header reporting
frame length
- provide a demo script that sets up namespaces, addresses and
routes, and starts the daemon. A virtual machine started in the
network namespace, wrapped by qrap, will now directly interface
with passt and communicate using Layer 4 sockets provided by the
host kernel.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-02-16 06:25:09 +00:00
|
|
|
*
|
2022-03-15 00:07:02 +00:00
|
|
|
* - TAP_FIN_SENT FIN sent to tap/guest
|
tcp: Fixes for closing states, spliced connections, out-of-order packets, etc.
This fixes a number of issues found with some heavier testing with
uperf and neper:
- in most closing states, we can still accept data, check for EPOLLIN
when appropriate
- introduce a new state, ESTABLISHED_SOCK_FIN_SENT, to track the fact
we already sent a FIN segment to the tap device, for proper sequence
number bookkeeping
- for pasta mode only: spliced connections also need tracking of
(inferred) FIN segments and clean half-pipe shutdowns
- streamline resetting epoll_wait bitmaps with a new function,
tcp_tap_epoll_mask(), instead of repeating the logic all over the
place
- set EPOLLET for tap connections too, whenever we are waiting for
EPOLLRDHUP or an event from the tap to proceed with data transfer,
to avoid useless loops with EPOLLIN set
- impose an additional limit on the sending window advertised to the
guest, given by SO_SNDBUF: it makes no sense to completely fill
the sending buffer and send a zero window: stop a bit before we
hit that
- handle *all* interrupted system calls as needed
- simplify the logic for reordering of out-of-order segments received
from tap: it's not a corner case, and the previous logic allowed
for deadloops
- fix comparison of seen IPv4 address when we get a new connection
from a socket directed to the configured guest address
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-09 13:16:46 +00:00
|
|
|
*
|
2022-03-15 00:07:02 +00:00
|
|
|
* - TAP_FIN_ACKED ACK to FIN seen from tap/guest
|
|
|
|
*
|
|
|
|
* Setting any event in CONN_STATE_BITS (SOCK_ACCEPTED, TAP_SYN_RCVD,
|
|
|
|
* ESTABLISHED) clears all the other events, as those represent the fundamental
|
|
|
|
* connection states. No events (events == CLOSED) means the connection is
|
|
|
|
* closed.
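To make the event scheme concrete, here is a minimal, standalone sketch (not the actual passt code) of events kept as flag bits, where setting any bit in CONN_STATE_BITS wipes the others, exactly as described above; the bit values and the conn_event() helper are illustrative assumptions.

#include <stdint.h>
#include <stdio.h>

/* Sketch only: event bits for one connection, loosely following the list
 * above. Bit values and helper names are illustrative, not passt's. */
#define SOCK_ACCEPTED		(1U << 0)
#define TAP_SYN_RCVD		(1U << 1)
#define ESTABLISHED		(1U << 2)
#define TAP_SYN_ACK_SENT	(1U << 3)
#define SOCK_FIN_RCVD		(1U << 4)
#define SOCK_FIN_SENT		(1U << 5)
#define TAP_FIN_RCVD		(1U << 6)
#define TAP_FIN_SENT		(1U << 7)
#define TAP_FIN_ACKED		(1U << 8)
#define CLOSED			0

#define CONN_STATE_BITS		(SOCK_ACCEPTED | TAP_SYN_RCVD | ESTABLISHED)

struct conn_sketch {
	uint32_t events;
};

/* Record an event: a bit in CONN_STATE_BITS replaces everything else,
 * any other event simply accumulates on top of the current state. */
static void conn_event(struct conn_sketch *c, uint32_t event)
{
	if (event == CLOSED)
		c->events = CLOSED;
	else if (event & CONN_STATE_BITS)
		c->events = event;
	else
		c->events |= event;
}

int main(void)
{
	struct conn_sketch c = { CLOSED };

	conn_event(&c, TAP_SYN_RCVD);		/* SYN from guest */
	conn_event(&c, TAP_SYN_ACK_SENT);	/* SYN, ACK sent back */
	conn_event(&c, ESTABLISHED);		/* clears the two bits above */
	printf("events: %#x\n", (unsigned)c.events);
	return 0;
}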
*
* Connection setup
* ----------------
*
* - inbound connection (from socket to guest): on accept() from the listening
*   socket, the new socket is mapped in the connection tracking table, and a
*   three-way handshake is initiated towards the guest, advertising MSS and
*   window size and scaling from socket parameters
* - outbound connection (from guest to socket): on SYN segment from the guest,
*   a new socket is created and mapped in the connection tracking table,
*   setting MSS and window clamping from the header and options of the
*   observed SYN segment
*
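The outbound path above can be pictured with the following hedged sketch: it opens a non-blocking socket for a SYN seen from the guest and applies the MSS and window clamp taken from that segment. The function name and the idea that daddr, dport, guest_mss and guest_wnd arrive already parsed are assumptions for illustration; the real code also handles IPv6, the connection table and the failure paths.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch of the outbound path: open a non-blocking socket for a
 * guest-initiated connection and clamp MSS and window to what the guest
 * advertised in its SYN. Error handling is reduced to a minimum. */
static int outbound_sock_sketch(struct in_addr daddr, in_port_t dport,
				int guest_mss, int guest_wnd)
{
	struct sockaddr_in sa = {
		.sin_family = AF_INET,
		.sin_port   = htons(dport),
		.sin_addr   = daddr,
	};
	int s = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, IPPROTO_TCP);

	if (s < 0)
		return -1;

	/* Best effort: the kernel may round or further clamp these values */
	setsockopt(s, IPPROTO_TCP, TCP_MAXSEG, &guest_mss, sizeof(guest_mss));
	setsockopt(s, IPPROTO_TCP, TCP_WINDOW_CLAMP, &guest_wnd,
		   sizeof(guest_wnd));

	/* Non-blocking connect(): completion is reported later via epoll */
	if (connect(s, (struct sockaddr *)&sa, sizeof(sa)) &&
	    errno != EINPROGRESS) {
		close(s);
		return -1;
	}
	return s;
}

int main(void)
{
	struct in_addr lo = { .s_addr = htonl(INADDR_LOOPBACK) };
	int s = outbound_sock_sketch(lo, 8080, 1460, 65535);

	printf("outbound socket: %d\n", s);
	if (s >= 0)
		close(s);
	return 0;
}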
*
* Aging and timeout
* -----------------
*
* Timeouts are implemented by means of timerfd timers, set based on flags:
*
* - SYN_TIMEOUT: if no ACK is received from tap/guest during handshake (flag
*   ACK_FROM_TAP_DUE without ESTABLISHED event) within this time, reset the
*   connection
*
* - ACK_TIMEOUT: if no ACK segment was received from tap/guest after sending
*   data (flag ACK_FROM_TAP_DUE with ESTABLISHED event), re-send data from the
*   socket and reset the sequence to what was acknowledged. If this persists
*   for more than TCP_MAX_RETRANS times in a row, reset the connection
*
* - FIN_TIMEOUT: if a FIN segment was sent to tap/guest (flag ACK_FROM_TAP_DUE
*   with TAP_FIN_SENT event), and no ACK is received within this time, reset
*   the connection
*
* - FIN_TIMEOUT: if a FIN segment was acknowledged by tap/guest and a FIN
*   segment (write shutdown) was sent via socket (events SOCK_FIN_SENT and
*   TAP_FIN_ACKED), but no activity is detected on the socket within this
*   time, reset the connection
*
* - ACT_TIMEOUT, in the presence of any event: if no activity is detected on
*   either side, the connection is reset
*
* - ACK_INTERVAL elapsed after a data segment was received from tap without
*   having sent an ACK segment, or a zero-sized window was advertised to
*   tap/guest (flag ACK_TO_TAP_DUE): forcibly check if an ACK segment can be
*   sent
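As a rough illustration of the timer handling (not passt's actual logic), the sketch below arms one timerfd per connection with whichever timeout currently applies; the flag and event macros mirror the names used above, the FIN cases are omitted for brevity, and the constants are repeated so the example is self-contained.

#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <time.h>
#include <unistd.h>

/* Illustrative flag/event bits, mirroring the description above */
#define ACK_TO_TAP_DUE		(1U << 0)
#define ACK_FROM_TAP_DUE	(1U << 1)
#define CONN_ESTABLISHED	(1U << 2)	/* event, not a flag */

#define ACK_INTERVAL	10	/* ms */
#define SYN_TIMEOUT	10	/* s */
#define ACK_TIMEOUT	2
#define ACT_TIMEOUT	7200

/* Pick the timeout that currently applies and (re-)arm the timer */
static int conn_timer_arm(int tfd, uint32_t flags, uint32_t events)
{
	struct itimerspec it = { { 0, 0 }, { 0, 0 } };

	if (flags & ACK_TO_TAP_DUE)
		it.it_value.tv_nsec = (long)ACK_INTERVAL * 1000 * 1000;
	else if ((flags & ACK_FROM_TAP_DUE) && !(events & CONN_ESTABLISHED))
		it.it_value.tv_sec = SYN_TIMEOUT;
	else if (flags & ACK_FROM_TAP_DUE)
		it.it_value.tv_sec = ACK_TIMEOUT;
	else
		it.it_value.tv_sec = ACT_TIMEOUT;

	return timerfd_settime(tfd, 0, &it, NULL);
}

int main(void)
{
	int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK | TFD_CLOEXEC);

	if (tfd < 0 || conn_timer_arm(tfd, ACK_FROM_TAP_DUE, 0)) {
		perror("timerfd");
		return 1;
	}
	printf("armed SYN_TIMEOUT on timerfd %d\n", tfd);
	close(tfd);
	return 0;
}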
*
*
* Summary of data flows (with ESTABLISHED event)
* ----------------------------------------------
*
* @seq_to_tap:        next sequence for packets to tap/guest
* @seq_ack_from_tap:  last ACK number received from tap/guest
* @seq_from_tap:      next sequence for packets from tap/guest (expected)
* @seq_ack_to_tap:    last ACK number sent to tap/guest
*
* @seq_init_from_tap: initial sequence number from tap/guest
* @seq_init_to_tap:   initial sequence number to tap/guest
*
* @wnd_from_tap:      last window size received from tap/guest, never scaled
* @wnd_to_tap:        last window size advertised to tap/guest, never scaled
*
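A small standalone sketch of the counters listed above, together with a wrap-safe sequence comparison and the "can we still send to the guest" check built from @seq_to_tap, @seq_ack_from_tap and @wnd_from_tap; the struct and helper names are illustrative only.

#include <stdint.h>
#include <stdio.h>

/* Per-connection sequence/window bookkeeping, as in the list above
 * (sketch: the real passt structure holds more than this) */
struct tcp_seq_sketch {
	uint32_t seq_to_tap;		/* next sequence to tap/guest */
	uint32_t seq_ack_from_tap;	/* last ACK number from tap/guest */
	uint32_t seq_from_tap;		/* next sequence expected from tap */
	uint32_t seq_ack_to_tap;	/* last ACK number sent to tap */
	uint32_t wnd_from_tap;		/* last window received, unscaled */
	uint32_t wnd_to_tap;		/* last window advertised, unscaled */
};

/* Wrap-safe "a is after b" comparison for 32-bit sequence numbers */
static int seq_after(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}

/* Data can be sent to tap while the bytes in flight fit the window */
static int can_send_to_tap(const struct tcp_seq_sketch *c)
{
	return !seq_after(c->seq_to_tap,
			  c->seq_ack_from_tap + c->wnd_from_tap);
}

int main(void)
{
	struct tcp_seq_sketch c = {
		.seq_to_tap = 0xffffff00, .seq_ack_from_tap = 0xfffffe00,
		.wnd_from_tap = 4096,
	};

	/* 256 bytes in flight, 4096 bytes of window: prints 1 */
	printf("can send to tap: %d\n", can_send_to_tap(&c));
	return 0;
}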
* - from socket to tap/guest:
*   - on new data from socket:
*     - peek into buffer
*     - send data to tap/guest:
*       - starting at offset (@seq_to_tap - @seq_ack_from_tap)
*       - in MSS-sized segments
*       - increasing @seq_to_tap at each segment
*       - up to window (until @seq_to_tap - @seq_ack_from_tap <= @wnd_from_tap)
*     - on read error, send RST to tap/guest, close socket
*     - on zero read, send FIN to tap/guest, set TAP_FIN_SENT
*   - on ACK from tap/guest:
*     - set @ts_ack_from_tap
*     - check if it's the second duplicated ACK
*     - consume buffer by difference between new ack_seq and @seq_ack_from_tap
*     - update @seq_ack_from_tap from ack_seq in header
*     - on two duplicated ACKs, reset @seq_to_tap to @seq_ack_from_tap, and
*       resend with steps listed above
*
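The socket-to-tap steps above, reduced to a hedged sketch: data is peeked with MSG_PEEK, cut into MSS-sized pieces and accounted against @seq_to_tap without being dequeued, since the socket buffer is only consumed once the guest acknowledges. send_segment_to_tap() is a placeholder for the frame construction that passt actually performs.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

#define MSS_SKETCH 1460

/* Placeholder: in passt this would queue a TCP segment towards the
 * tap device; here it only reports what would be sent. */
static void send_segment_to_tap(uint32_t seq, const char *data, size_t len)
{
	(void)data;
	printf("segment: seq %u, %zu bytes\n", (unsigned)seq, len);
}

/* Peek pending socket data and emit MSS-sized segments, advancing
 * seq_to_tap but leaving the data in the socket buffer (it is consumed
 * only when the corresponding ACK arrives from the guest). */
static void sock_to_tap_sketch(int s, uint32_t *seq_to_tap,
			       uint32_t seq_ack_from_tap, uint32_t wnd)
{
	char buf[64 * 1024];
	uint32_t in_flight = *seq_to_tap - seq_ack_from_tap;
	ssize_t n = recv(s, buf, sizeof(buf), MSG_PEEK | MSG_DONTWAIT);
	size_t off = in_flight;	/* skip what was already sent */

	if (n <= 0)
		return;	/* zero read (FIN) and errors handled elsewhere */

	while (off < (size_t)n && *seq_to_tap - seq_ack_from_tap < wnd) {
		size_t len = (size_t)n - off;

		if (len > MSS_SKETCH)
			len = MSS_SKETCH;
		send_segment_to_tap(*seq_to_tap, buf + off, len);
		*seq_to_tap += len;
		off += len;
	}
}

int main(void)
{
	const char *msg = "some data pending on the socket side";
	uint32_t seq_to_tap = 1000, seq_ack_from_tap = 1000;
	int sv[2];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
		return 1;
	send(sv[0], msg, strlen(msg), 0);
	sock_to_tap_sketch(sv[1], &seq_to_tap, seq_ack_from_tap, 65535);
	return 0;
}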
* - from tap/guest to socket:
*   - on packet from tap/guest:
*     - set @ts_tap_act
*     - check seq from header against @seq_from_tap, if data is missing, send
*       two ACKs with number @seq_ack_to_tap, discard packet
*     - otherwise queue data to socket, set @seq_from_tap to seq from header
*       plus payload length
*     - in ESTABLISHED state, send ACK to tap as soon as we queue to the
*       socket. In other states, query socket for TCP_INFO, set
*       @seq_ack_to_tap to (tcpi_bytes_acked + @seq_init_from_tap) % 2^32 and
*       send ACK to tap/guest
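And the opposite direction, as another illustrative sketch: a segment from the guest is dropped (and the last ACK repeated twice) if it does not start exactly at @seq_from_tap, otherwise its payload is queued to the socket and the expected sequence advanced. send_ack_to_tap() is a placeholder, and overlapping retransmissions and partial writes are ignored here.

#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Placeholder: in passt this builds and queues an ACK segment to tap */
static void send_ack_to_tap(uint32_t ack)
{
	printf("ACK to tap: %u\n", (unsigned)ack);
}

/* Handle a data segment from the guest: drop and re-ACK if it doesn't
 * start exactly at the expected sequence, otherwise queue the payload
 * to the socket and move the expected sequence forward. */
static void tap_data_sketch(int s, uint32_t *seq_from_tap,
			    uint32_t seq_ack_to_tap,
			    uint32_t seg_seq, const char *data, size_t len)
{
	if (seg_seq != *seq_from_tap) {
		/* Missing data in between: duplicate ACK, discard */
		send_ack_to_tap(seq_ack_to_tap);
		send_ack_to_tap(seq_ack_to_tap);
		return;
	}

	if (send(s, data, len, MSG_DONTWAIT) < 0)
		return;		/* short/failed writes handled elsewhere */

	*seq_from_tap += len;
	send_ack_to_tap(*seq_from_tap);	/* ESTABLISHED: ACK what we queued */
}

int main(void)
{
	uint32_t seq_from_tap = 1;
	int sv[2];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
		return 1;
	tap_data_sketch(sv[0], &seq_from_tap, 1, 1, "hello", 5);  /* in order */
	tap_data_sketch(sv[0], &seq_from_tap, 6, 100, "late", 4); /* gap */
	return 0;
}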
*
*
* PASTA mode
* ==========
*
* For traffic directed to TCP ports configured for mapping to the tuntap
* device in the namespace, and for non-local traffic coming from the tuntap
* device, the implementation is identical to the PASST mode described in the
* previous section.
*
* For local traffic directed to TCP ports configured for direct mapping
* between namespaces, see the implementation in tcp_splice.c.
*/

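As a toy model of the split just described (purely illustrative, with made-up names and data structures), one can think of two per-port bitmaps: ports configured for direct namespace-to-namespace mapping take the spliced path, while ports mapped to the tuntap device take the same path as PASST mode.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch: per-port forwarding maps, loosely mirroring the split above.
 * A bit in splice_map means "direct namespace-to-namespace path", a bit
 * in tap_map means "go through the tap device". Names are made up. */
static uint8_t tap_map[65536 / 8];
static uint8_t splice_map[65536 / 8];

static bool bit(const uint8_t *map, uint16_t port)
{
	return map[port / 8] & (1 << (port % 8));
}

static void set_bit(uint8_t *map, uint16_t port)
{
	map[port / 8] |= 1 << (port % 8);
}

/* Dispatch a new inbound connection on 'port' */
static void dispatch_sketch(uint16_t port)
{
	if (bit(splice_map, port))
		printf("port %u: spliced path (see tcp_splice.c)\n", port);
	else if (bit(tap_map, port))
		printf("port %u: tap path, same as PASST mode\n", port);
	else
		printf("port %u: not forwarded\n", port);
}

int main(void)
{
	set_bit(tap_map, 8080);
	set_bit(splice_map, 5201);
	dispatch_sketch(8080);
	dispatch_sketch(5201);
	dispatch_sketch(22);
	return 0;
}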
#include <sched.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <stdlib.h>
#include <errno.h>
#include <limits.h>
#include <net/ethernet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/timerfd.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <time.h>
#include <arpa/inet.h>

#include "checksum.h"
#include "util.h"
#include "iov.h"
#include "ip.h"
#include "passt.h"
#include "tap.h"
#include "siphash.h"
#include "pcap.h"
#include "tcp_splice.h"
#include "log.h"
#include "inany.h"
#include "flow.h"
#include "linux_dep.h"

#include "flow_table.h"
#include "tcp_internal.h"
#include "tcp_buf.h"

/* MSS rounding: see SET_MSS() */
#define MSS_DEFAULT 536
#define WINDOW_DEFAULT 14600 /* RFC 6928 */
tcp: Don't reset ACK_TO_TAP_DUE on any ACK, reschedule timer as needed
This is mostly symmetric with commit cc6d8286d104 ("tcp: Reset
ACK_FROM_TAP_DUE flag only as needed, update timer"): we shouldn't
reset the ACK_TO_TAP_DUE flag on any inbound ACK segment, but only
once we acknowledge everything we received from the guest or the
container.
If we don't, a client might unnecessarily hold off further data,
especially during slow start, and in general we won't converge to the
usable bandwidth.
This is especially visible with traffic tests on links with
non-negligible latency, such as in the reported issue. There, a
public iperf3 server sometimes aborts the test due to what appears to
be a low iperf3 --rcv-timeout (probably less than a second). Even if
this doesn't happen, the throughput converges to a fraction of the
usable bandwidth.
Clear ACK_TO_TAP_DUE if we acknowledged everything, set it if we
didn't, and reschedule the timer if the flag is still set when the
timer expires.
While at it, decrease the ACK timer interval to 10 ms.
A 50 ms interval is short enough for any bandwidth-delay product I had
in mind (local connections, or non-local connections with limited
bandwidth), but here I am, testing 1 Gbps transfers to a peer with
100 ms RTT.
Indeed, we could eventually make the timer interval dependent on the
current window and estimated bandwidth-delay product, but at least
for the time being, 10 ms should be long enough to avoid any
measurable syscall overhead, yet usable for any real-world
application.
Reported-by: Lukas Mrtvy <lukas.mrtvy@gmail.com>
Link: https://bugs.passt.top/show_bug.cgi?id=44
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

#define ACK_INTERVAL 10 /* ms */
#define SYN_TIMEOUT 10 /* s */
#define ACK_TIMEOUT 2 /* s */
#define FIN_TIMEOUT 60 /* s */
#define ACT_TIMEOUT 7200 /* s */

#define LOW_RTT_TABLE_SIZE 8
#define LOW_RTT_THRESHOLD 10 /* us */
#define ACK_IF_NEEDED 0 /* See tcp_send_flag() */
#define CONN_IS_CLOSING(conn) \
	(((conn)->events & ESTABLISHED) && \
	 ((conn)->events & (SOCK_FIN_RCVD | TAP_FIN_RCVD)))

#define CONN_HAS(conn, set) (((conn)->events & (set)) == (set))
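
As a purely illustrative usage sketch (the helper below is hypothetical, not a
passt function): CONN_IS_CLOSING() is true as soon as an established
connection has seen a FIN from either side, while CONN_HAS() checks that every
event in a given set has occurred.

/* Illustrative only: a FIN was seen from both the socket and the tap side */
static inline bool conn_both_fins_seen(const struct tcp_tap_conn *conn)
{
	return CONN_HAS(conn, SOCK_FIN_RCVD | TAP_FIN_RCVD);
}
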
static const char *tcp_event_str[] __attribute((__unused__)) = {
	"SOCK_ACCEPTED", "TAP_SYN_RCVD", "ESTABLISHED", "TAP_SYN_ACK_SENT",
	"SOCK_FIN_RCVD", "SOCK_FIN_SENT", "TAP_FIN_RCVD", "TAP_FIN_SENT",
	"TAP_FIN_ACKED",
};
static const char *tcp_state_str[] __attribute((__unused__)) = {
	"SYN_RCVD", "SYN_SENT", "ESTABLISHED",
	"SYN_RCVD", /* approximately maps to TAP_SYN_ACK_SENT */

	/* Passive close: */
	"CLOSE_WAIT", "CLOSE_WAIT", "LAST_ACK", "LAST_ACK", "LAST_ACK",

	/* Active close (+5): */
	"CLOSING", "FIN_WAIT_1", "FIN_WAIT_1", "FIN_WAIT_2", "TIME_WAIT",
};
static const char *tcp_flag_str[] __attribute((__unused__)) = {
	"STALLED", "LOCAL", "ACTIVE_CLOSE", "ACK_TO_TAP_DUE",
	"ACK_FROM_TAP_DUE",
};

/* Listening sockets, used for automatic port forwarding in pasta mode only */
static int tcp_sock_init_ext [NUM_PORTS][IP_VERSIONS];
static int tcp_sock_ns [NUM_PORTS][IP_VERSIONS];

/* Table of our guest side addresses with very low RTT (assumed to be local to
 * the host), LRU
 */
static union inany_addr low_rtt_dst[LOW_RTT_TABLE_SIZE];

char tcp_buf_discard [MAX_WINDOW];

/* Does the kernel support TCP_PEEK_OFF? */
bool peek_offset_cap;

tcp: Remove compile-time dependency on struct tcp_info version
In the Makefile we probe to create several defines based on the presence
of particular fields in struct tcp_info. These defines are used for two
purposes, neither of which they accomplish well:
1) Determining if the tcp_info fields are available at runtime. For this
   purpose the defines are just plain wrong, since the runtime kernel may
   not be the same as the compile-time kernel. We corrected this for
   tcpi_snd_wnd, but not for tcpi_bytes_acked or tcpi_min_rtt.
2) Allowing the source to compile against older kernel headers which don't
   have the fields in question. This works in theory, but it does mean
   we won't be able to use the fields, even if we later run against a
   newer kernel. Furthermore, it's quite fragile: without much more
   thorough tests of builds in different environments than we're currently
   set up for, it's very easy to miss cases where we're accessing a field
   without protection from an #ifdef. For example, we currently access
   tcpi_snd_wnd without #ifdefs in tcp_update_seqack_wnd().
Improve this with a different approach, borrowed from qemu (which has many
instances of similar problems). Don't compile against linux/tcp.h; use
netinet/tcp.h instead. Then, when we need an extension field, define a
struct tcp_info_linux, copied from the kernel, with all the fields we're
interested in. That may need updating for future kernel versions, but
only when we want to use a new extension, so it shouldn't be frequent.
This allows us to remove the HAS_SND_WND define entirely. We keep
HAS_BYTES_ACKED and HAS_MIN_RTT for now, since they're used for purpose
(1); we'll fix that in a later patch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Trivial grammar fixes in comments]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

/* Size of data returned by TCP_INFO getsockopt() */
socklen_t tcp_info_size;

#define tcp_info_cap(f_) \
	((offsetof(struct tcp_info_linux, tcpi_##f_) + \
	  sizeof(((struct tcp_info_linux *)NULL)->tcpi_##f_)) <= tcp_info_size)

/* Kernel reports sending window in TCP_INFO (kernel commit 8f7baad7f035) */
#define snd_wnd_cap tcp_info_cap(snd_wnd)
/* Kernel reports bytes acked in TCP_INFO (kernel commit 0df48c26d84) */
#define bytes_acked_cap tcp_info_cap(bytes_acked)
/* Kernel reports minimum RTT in TCP_INFO (kernel commit cd9b266095f4) */
#define min_rtt_cap tcp_info_cap(min_rtt)
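
The checks above compare field offsets against tcp_info_size, which is
presumably filled once at start-up. A minimal sketch of how that could be done
(illustrative only, using a throw-away socket; the kernel copies at most the
size of its own struct tcp_info and reports the copied length back):

static void tcp_info_probe_sketch(void)
{
	struct tcp_info_linux tinfo;
	socklen_t sl = sizeof(tinfo);
	int s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);

	if (s < 0)
		return;

	/* The length reported back is what tcp_info_cap() compares
	 * field offsets against.
	 */
	if (!getsockopt(s, IPPROTO_TCP, TCP_INFO, &tinfo, &sl))
		tcp_info_size = sl;

	close(s);
}
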

/* sendmsg() to socket */
static struct iovec tcp_iov [UIO_MAXIOV];

/* Pools for pre-opened sockets (in init) */
int init_sock_pool4 [TCP_SOCK_POOL_SIZE];
int init_sock_pool6 [TCP_SOCK_POOL_SIZE];

/**
 * conn_at_sidx() - Get TCP connection specific flow at given sidx
 * @sidx:	Flow and side to retrieve
 *
 * Return: TCP connection at @sidx, or NULL if @sidx is invalid. Asserts if the
 *         flow at @sidx is not FLOW_TCP.
 */
static struct tcp_tap_conn *conn_at_sidx(flow_sidx_t sidx)
{
	union flow *flow = flow_at_sidx(sidx);

	if (!flow)
		return NULL;

	ASSERT(flow->f.type == FLOW_TCP);
	return &flow->tcp;
}

/**
 * tcp_set_peek_offset() - Set SO_PEEK_OFF offset on a socket if supported
 * @s:		Socket to update
 * @offset:	Offset in bytes
 *
 * Return: -1 when it fails, 0 otherwise.
 */
int tcp_set_peek_offset(int s, int offset)
{
	if (!peek_offset_cap)
		return 0;

	if (setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &offset, sizeof(offset))) {
		err("Failed to set SO_PEEK_OFF to %i in socket %i", offset, s);
		return -1;
	}
	return 0;
}
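
peek_offset_cap itself has to be probed at run time, since SO_PEEK_OFF support
for TCP sockets only exists in recent kernels. One possible probe, sketched
here under the assumption that a scratch socket is acceptable (not necessarily
the exact probe passt performs):

static bool probe_peek_offset_cap_sketch(void)
{
	bool ret = false;
	int zero = 0;
	int s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);

	if (s < 0)
		return false;

	/* Kernels without SO_PEEK_OFF support for TCP reject this */
	if (!setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &zero, sizeof(zero)))
		ret = true;

	close(s);
	return ret;
}
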

/**
 * tcp_conn_epoll_events() - epoll events mask for given connection state
 * @events:	Current connection events
 * @conn_flags:	Connection flags
 *
 * Return: epoll events mask corresponding to implied connection state
 */
static uint32_t tcp_conn_epoll_events(uint8_t events, uint8_t conn_flags)
{
	if (!events)
		return 0;

	if (events & ESTABLISHED) {
		if (events & TAP_FIN_SENT)
			return EPOLLET;

		if (conn_flags & STALLED)
			return EPOLLIN | EPOLLOUT | EPOLLRDHUP | EPOLLET;

		return EPOLLIN | EPOLLRDHUP;
	}

	if (events == TAP_SYN_RCVD)
		return EPOLLOUT | EPOLLET | EPOLLRDHUP;

	/* Not established yet: EPOLLRDHUP is the only event subscribed to
	 * here, and repeated reports of it are useless until the guest or
	 * container side completes the handshake, so make it edge-triggered.
	 */
	return EPOLLET | EPOLLRDHUP;
}

/**
 * tcp_epoll_ctl() - Add/modify/delete epoll state from connection events
 * @c:		Execution context
 * @conn:	Connection pointer
 *
 * Return: 0 on success, negative error code on failure (not on deletion)
 */
static int tcp_epoll_ctl(const struct ctx *c, struct tcp_tap_conn *conn)
{
	int m = conn->in_epoll ? EPOLL_CTL_MOD : EPOLL_CTL_ADD;
	union epoll_ref ref = { .type = EPOLL_TYPE_TCP, .fd = conn->sock,
				.flowside = FLOW_SIDX(conn, !TAPSIDE(conn)), };
	struct epoll_event ev = { .data.u64 = ref.u64 };

	if (conn->events == CLOSED) {
		if (conn->in_epoll)
			epoll_ctl(c->epollfd, EPOLL_CTL_DEL, conn->sock, &ev);
		if (conn->timer != -1)
			epoll_ctl(c->epollfd, EPOLL_CTL_DEL, conn->timer, &ev);
		return 0;
	}

	ev.events = tcp_conn_epoll_events(conn->events, conn->flags);

	if (epoll_ctl(c->epollfd, m, conn->sock, &ev))
		return -errno;

	conn->in_epoll = true;

	if (conn->timer != -1) {
		union epoll_ref ref_t = { .type = EPOLL_TYPE_TCP_TIMER,
					  .fd = conn->sock,
					  .flow = FLOW_IDX(conn) };
		struct epoll_event ev_t = { .data.u64 = ref_t.u64,
					    .events = EPOLLIN | EPOLLET };

		if (epoll_ctl(c->epollfd, EPOLL_CTL_MOD, conn->timer, &ev_t))
			return -errno;
	}

	return 0;
}
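
For context, a condensed sketch of how the main loop is expected to
demultiplex the references registered above (simplified and illustrative; the
real dispatcher lives outside this file and handles many more types):

static void epoll_ref_dispatch_sketch(const struct ctx *c,
				      const struct epoll_event *ev)
{
	union epoll_ref ref = { .u64 = ev->data.u64 };

	(void)c;

	switch (ref.type) {
	case EPOLL_TYPE_TCP:
		/* Socket event on a connection: ref.fd and ref.flowside
		 * were filled in by tcp_epoll_ctl() above.
		 */
		break;
	case EPOLL_TYPE_TCP_TIMER:
		/* Expiry of a timer armed by tcp_timer_ctl() below:
		 * ref.flow identifies the connection.
		 */
		break;
	default:
		break;
	}
}
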

/**
 * tcp_timer_ctl() - Set timerfd based on flags/events, create timerfd if needed
 * @c:		Execution context
 * @conn:	Connection pointer
 *
 * #syscalls timerfd_create timerfd_settime
 */
static void tcp_timer_ctl(const struct ctx *c, struct tcp_tap_conn *conn)
{
	struct itimerspec it = { { 0 }, { 0 } };

	if (conn->events == CLOSED)
		return;

	if (conn->timer == -1) {
		union epoll_ref ref = { .type = EPOLL_TYPE_TCP_TIMER,
					.fd = conn->sock,
					.flow = FLOW_IDX(conn) };
		struct epoll_event ev = { .data.u64 = ref.u64,
					  .events = EPOLLIN | EPOLLET };
		int fd;

		fd = timerfd_create(CLOCK_MONOTONIC, 0);
		if (fd == -1 || fd > FD_REF_MAX) {
			flow_dbg(conn, "failed to get timer: %s",
				 strerror(errno));
			if (fd > -1)
				close(fd);
			conn->timer = -1;
			return;
		}
		conn->timer = fd;

		if (epoll_ctl(c->epollfd, EPOLL_CTL_ADD, conn->timer, &ev)) {
			flow_dbg(conn, "failed to add timer: %s",
				 strerror(errno));
			close(conn->timer);
			conn->timer = -1;
			return;
		}
	}

	if (conn->flags & ACK_TO_TAP_DUE) {
		it.it_value.tv_nsec = (long)ACK_INTERVAL * 1000 * 1000;
	} else if (conn->flags & ACK_FROM_TAP_DUE) {
		if (!(conn->events & ESTABLISHED))
			it.it_value.tv_sec = SYN_TIMEOUT;
		else
			it.it_value.tv_sec = ACK_TIMEOUT;
	} else if (CONN_HAS(conn, SOCK_FIN_SENT | TAP_FIN_ACKED)) {
		it.it_value.tv_sec = FIN_TIMEOUT;
	} else {
		it.it_value.tv_sec = ACT_TIMEOUT;
	}

	flow_dbg(conn, "timer expires in %llu.%03llus",
		 (unsigned long long)it.it_value.tv_sec,
		 (unsigned long long)it.it_value.tv_nsec / 1000 / 1000);

	if (timerfd_settime(conn->timer, 0, &it, NULL))
		flow_err(conn, "failed to set timer: %s", strerror(errno));
}
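
When a timer armed here expires, epoll reports EPOLLIN on conn->timer. A
minimal sketch of acknowledging the expiry before acting on it (illustrative;
the actual timer handler also re-checks flags and events):

static void tcp_timer_expiry_sketch(const struct tcp_tap_conn *conn)
{
	uint64_t expirations;

	/* A timerfd is drained by reading an 8-byte expiry count */
	if (read(conn->timer, &expirations, sizeof(expirations)) < 0)
		flow_dbg(conn, "failed to read timer expiry: %s",
			 strerror(errno));
}
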

/**
 * conn_flag_do() - Set/unset given flag, log, update epoll on STALLED flag
 * @c:		Execution context
 * @conn:	Connection pointer
 * @flag:	Flag to set, or ~flag to unset
 */
void conn_flag_do(const struct ctx *c, struct tcp_tap_conn *conn,
		  unsigned long flag)
{
	if (flag & (flag - 1)) {
		int flag_index = fls(~flag);

		if (!(conn->flags & ~flag))
			return;

		conn->flags &= flag;
		if (flag_index >= 0)
			flow_dbg(conn, "%s dropped", tcp_flag_str[flag_index]);
	} else {
		int flag_index = fls(flag);

		if (conn->flags & flag) {
			/* Special case: setting ACK_FROM_TAP_DUE on a
			 * connection where it's already set is used to
			 * re-schedule the existing timer.
			 * TODO: define clearer semantics for timer-related
			 * flags and factor this into the logic below.
			 */
			if (flag == ACK_FROM_TAP_DUE)
				tcp_timer_ctl(c, conn);

			return;
		}

		conn->flags |= flag;
		if (flag_index >= 0)
			flow_dbg(conn, "%s", tcp_flag_str[flag_index]);
	}

	if (flag == STALLED || flag == ~STALLED)
		tcp_epoll_ctl(c, conn);

	if (flag == ACK_FROM_TAP_DUE || flag == ACK_TO_TAP_DUE ||
	    (flag == ~ACK_FROM_TAP_DUE && (conn->flags & ACK_TO_TAP_DUE)) ||
	    (flag == ~ACK_TO_TAP_DUE && (conn->flags & ACK_FROM_TAP_DUE)))
		tcp_timer_ctl(c, conn);
}
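
Note the calling convention: passing a flag sets it, passing its bitwise
complement clears it, which is what the (flag & (flag - 1)) test above
distinguishes. A hypothetical call site, for illustration only:

static void conn_stall_sketch(const struct ctx *c, struct tcp_tap_conn *conn,
			      bool stalled)
{
	if (stalled)
		conn_flag_do(c, conn, STALLED);		/* set the flag */
	else
		conn_flag_do(c, conn, ~STALLED);	/* clear the flag */
}
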

/**
 * conn_event_do() - Set and log connection events, update epoll state
 * @c:		Execution context
 * @conn:	Connection pointer
 * @event:	Connection event
 */
void conn_event_do(const struct ctx *c, struct tcp_tap_conn *conn,
		   unsigned long event)
{
	int prev, new, num = fls(event);

	if (conn->events & event)
		return;

	prev = fls(conn->events);
	if (conn->flags & ACTIVE_CLOSE)
		prev += 5;

	if ((conn->events & ESTABLISHED) && (conn->events != ESTABLISHED))
		prev++; /* i.e. SOCK_FIN_RCVD, not TAP_SYN_ACK_SENT */

	if (event == CLOSED || (event & CONN_STATE_BITS))
		conn->events = event;
	else
		conn->events |= event;

	new = fls(conn->events);

	if ((conn->events & ESTABLISHED) && (conn->events != ESTABLISHED)) {
		num++;
		new++;
	}
	if (conn->flags & ACTIVE_CLOSE)
		new += 5;

	if (prev != new)
		flow_dbg(conn, "%s: %s -> %s",
			 num == -1 ? "CLOSED" : tcp_event_str[num],
			 prev == -1 ? "CLOSED" : tcp_state_str[prev],
			 (new == -1 || num == -1) ? "CLOSED" : tcp_state_str[new]);
	else
		flow_dbg(conn, "%s",
			 num == -1 ? "CLOSED" : tcp_event_str[num]);

	if (event == CLOSED)
		flow_hash_remove(c, TAP_SIDX(conn));
	else if ((event == TAP_FIN_RCVD) && !(conn->events & SOCK_FIN_RCVD))
		conn_flag(c, conn, ACTIVE_CLOSE);
	else
		tcp_epoll_ctl(c, conn);

	if (CONN_HAS(conn, SOCK_FIN_SENT | TAP_FIN_ACKED))
		tcp_timer_ctl(c, conn);
}

/**
 * tcp_rtt_dst_low() - Check if low RTT was seen for connection endpoint
 * @conn:	Connection pointer
 *
 * Return: 1 if destination is in low RTT table, 0 otherwise
 */
static int tcp_rtt_dst_low(const struct tcp_tap_conn *conn)
{
	const struct flowside *tapside = TAPFLOW(conn);
	int i;

	for (i = 0; i < LOW_RTT_TABLE_SIZE; i++)
		if (inany_equals(&tapside->oaddr, low_rtt_dst + i))
			return 1;

	return 0;
}

/**
 * tcp_rtt_dst_check() - Check tcpi_min_rtt, insert endpoint in table if low
 * @conn:	Connection pointer
 * @tinfo:	Pointer to struct tcp_info for socket
 */
static void tcp_rtt_dst_check(const struct tcp_tap_conn *conn,
			      const struct tcp_info_linux *tinfo)
{
	const struct flowside *tapside = TAPFLOW(conn);
	int i, hole = -1;

	if (!min_rtt_cap ||
	    (int)tinfo->tcpi_min_rtt > LOW_RTT_THRESHOLD)
		return;

	for (i = 0; i < LOW_RTT_TABLE_SIZE; i++) {
		if (inany_equals(&tapside->oaddr, low_rtt_dst + i))
			return;
		if (hole == -1 && IN6_IS_ADDR_UNSPECIFIED(low_rtt_dst + i))
			hole = i;
	}

	/* Keep gcc 12 happy: this won't actually happen because the table is
	 * guaranteed to have a hole, see the inany_from_af() call below.
	 */
	if (hole == -1)
		return;

	low_rtt_dst[hole++] = tapside->oaddr;
	if (hole == LOW_RTT_TABLE_SIZE)
		hole = 0;
	inany_from_af(low_rtt_dst + hole, AF_INET6, &in6addr_any);
}
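
A simplified sketch of how this check would be fed (illustrative; the real
callers obtain the tcp_info_linux data as part of their regular TCP_INFO
queries rather than fetching it separately):

static void tcp_rtt_check_sketch(const struct tcp_tap_conn *conn)
{
	struct tcp_info_linux tinfo;
	socklen_t sl = sizeof(tinfo);

	if (getsockopt(conn->sock, IPPROTO_TCP, TCP_INFO, &tinfo, &sl))
		return;

	/* Records the guest-side peer if tcpi_min_rtt (in microseconds)
	 * is at or below LOW_RTT_THRESHOLD and the kernel reports it.
	 */
	tcp_rtt_dst_check(conn, &tinfo);
}
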

/**
 * tcp_get_sndbuf() - Get, scale SO_SNDBUF between thresholds (1 to 0.5 usage)
 * @conn:	Connection pointer
 */
static void tcp_get_sndbuf(struct tcp_tap_conn *conn)
{
	int s = conn->sock, sndbuf;
	socklen_t sl;
	uint64_t v;

	sl = sizeof(sndbuf);
	if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &sl)) {
		SNDBUF_SET(conn, WINDOW_DEFAULT);
		return;
	}

	v = sndbuf;
	if (v >= SNDBUF_BIG)
		v /= 2;
	else if (v > SNDBUF_SMALL)
		v -= v * (v - SNDBUF_SMALL) / (SNDBUF_BIG - SNDBUF_SMALL) / 2;

	SNDBUF_SET(conn, MIN(INT_MAX, v));
}
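
In other words, the usable share of the reported SO_SNDBUF tapers linearly
from the whole buffer at SNDBUF_SMALL down to half of it at SNDBUF_BIG: for a
value v between the two thresholds, the code above keeps
v * (1 - (v - SNDBUF_SMALL) / (SNDBUF_BIG - SNDBUF_SMALL) / 2), so, for
example, a buffer sitting halfway between the thresholds is scaled to roughly
75 % of its size (the exact thresholds are defined elsewhere in this file).
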

tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
  to clamp window sizes to avoid exceeding the size of our "discard"
  buffer for unacknowledged data from the socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
  mode, whenever we see an overlapping segment, or when we reach a
  given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
  latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
  the kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connections to a timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
  buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
  connections, to coalesce data moved by splice() calls as much as
  possible
- as we now have a compacted connection table, there's no need to
  keep sparse bitmaps tracking connection activity -- simply go
  through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
  too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for the originating socket in spliced connections,
  there's no need to delay them
- fix up the timeout for unacknowledged data from the socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

/**
 * tcp_sock_set_bufsize() - Set SO_RCVBUF and SO_SNDBUF to maximum values
 * @c:		Execution context
 * @s:		Socket, can be -1 to avoid check in the caller
 */
static void tcp_sock_set_bufsize(const struct ctx *c, int s)
{
	int v = INT_MAX / 2; /* Kernel clamps and rounds, no need to check */

	if (s == -1)
		return;

	/* If net.core.rmem_max or net.core.wmem_max is low (c->low_rmem,
	 * c->low_wmem), don't set the corresponding buffer size at all:
	 * setting it would lock the buffer to that low maximum, while leaving
	 * it alone still lets the tcp_rmem/tcp_wmem upper bounds apply.
	 */
	if (!c->low_rmem && setsockopt(s, SOL_SOCKET, SO_RCVBUF, &v, sizeof(v)))
		trace("TCP: failed to set SO_RCVBUF to %i", v);

	if (!c->low_wmem && setsockopt(s, SOL_SOCKET, SO_SNDBUF, &v, sizeof(v)))
		trace("TCP: failed to set SO_SNDBUF to %i", v);
}

/**
 * tcp_update_check_tcp4() - Calculate TCP checksum for IPv4
 * @iph:	IPv4 header
 * @iov:	Pointer to the array of IO vectors
 * @iov_cnt:	Length of the array
 * @l4offset:	IPv4 payload offset in the iovec array
 */
void tcp_update_check_tcp4(const struct iphdr *iph,
			   const struct iovec *iov, int iov_cnt,
			   size_t l4offset)
{
	uint16_t l4len = ntohs(iph->tot_len) - sizeof(struct iphdr);
	struct in_addr saddr = { .s_addr = iph->saddr };
	struct in_addr daddr = { .s_addr = iph->daddr };
	size_t check_ofs;
	uint16_t *check;
	int check_idx;
	uint32_t sum;
	char *ptr;

	sum = proto_ipv4_header_psum(l4len, IPPROTO_TCP, saddr, daddr);

	check_idx = iov_skip_bytes(iov, iov_cnt,
				   l4offset + offsetof(struct tcphdr, check),
				   &check_ofs);

	if (check_idx >= iov_cnt) {
		err("TCP4 buffer is too small, iov size %zd, check offset %zd",
		    iov_size(iov, iov_cnt),
		    l4offset + offsetof(struct tcphdr, check));
		return;
	}

	if (check_ofs + sizeof(*check) > iov[check_idx].iov_len) {
		err("TCP4 checksum field memory is not contiguous "
		    "check_ofs %zd check_idx %d iov_len %zd",
		    check_ofs, check_idx, iov[check_idx].iov_len);
		return;
	}

	ptr = (char *)iov[check_idx].iov_base + check_ofs;
	if ((uintptr_t)ptr & (__alignof__(*check) - 1)) {
		err("TCP4 checksum field is not correctly aligned in memory");
		return;
	}

	check = (uint16_t *)ptr;

	*check = 0;
	*check = csum_iov(iov, iov_cnt, l4offset, sum);
}

/**
 * tcp_update_check_tcp6() - Calculate TCP checksum for IPv6
 * @ip6h:	IPv6 header
 * @iov:	Pointer to the array of IO vectors
 * @iov_cnt:	Length of the array
 * @l4offset:	IPv6 payload offset in the iovec array
 */
void tcp_update_check_tcp6(const struct ipv6hdr *ip6h,
			   const struct iovec *iov, int iov_cnt,
			   size_t l4offset)
{
	uint16_t l4len = ntohs(ip6h->payload_len);
	size_t check_ofs;
	uint16_t *check;
	int check_idx;
	uint32_t sum;
	char *ptr;

	sum = proto_ipv6_header_psum(l4len, IPPROTO_TCP, &ip6h->saddr,
				     &ip6h->daddr);

	check_idx = iov_skip_bytes(iov, iov_cnt,
				   l4offset + offsetof(struct tcphdr, check),
				   &check_ofs);

	if (check_idx >= iov_cnt) {
		err("TCP6 buffer is too small, iov size %zd, check offset %zd",
		    iov_size(iov, iov_cnt),
		    l4offset + offsetof(struct tcphdr, check));
		return;
	}

	if (check_ofs + sizeof(*check) > iov[check_idx].iov_len) {
		err("TCP6 checksum field memory is not contiguous "
		    "check_ofs %zd check_idx %d iov_len %zd",
		    check_ofs, check_idx, iov[check_idx].iov_len);
		return;
	}

	ptr = (char *)iov[check_idx].iov_base + check_ofs;
	if ((uintptr_t)ptr & (__alignof__(*check) - 1)) {
		err("TCP6 checksum field is not correctly aligned in memory");
		return;
	}

	check = (uint16_t *)ptr;

	*check = 0;
	*check = csum_iov(iov, iov_cnt, l4offset, sum);
}

/**
 * tcp_opt_get() - Get option, and value if any, from TCP header
 * @opts:	Pointer to start of TCP options in header
 * @len:	Length of buffer, excluding TCP header -- NOT checked here!
 * @type_find:	Option type to look for
 * @optlen_set:	Optional, filled with option length if passed
 * @value_set:	Optional, set to start of option value if passed
 *
 * Return: option value, meaningful for up to 4 bytes, -1 if not found
 */
static int tcp_opt_get(const char *opts, size_t len, uint8_t type_find,
                       uint8_t *optlen_set, const char **value_set)
{
        uint8_t type, optlen;

        if (!opts || !len)
                return -1;

        for (; len >= 2; opts += optlen, len -= optlen) {
                switch (*opts) {
                case OPT_EOL:
                        return -1;
                case OPT_NOP:
                        optlen = 1;
                        break;
                default:
                        type = *(opts++);

                        if (*(uint8_t *)opts < 2 || *(uint8_t *)opts > len)
                                return -1;

                        optlen = *(opts++) - 2;
                        len -= 2;

                        if (type != type_find)
                                break;

                        if (optlen_set)
                                *optlen_set = optlen;
                        if (value_set)
                                *value_set = opts;

                        switch (optlen) {
                        case 0:
                                return 0;
                        case 1:
                                return *opts;
                        case 2:
                                return ntohs(*(uint16_t *)opts);
                        default:
                                return ntohl(*(uint32_t *)opts);
                        }
                }
        }

        return -1;
}
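A hypothetical caller, included only to illustrate the interface -- the helper name, and spelling the MSS option kind as a literal 2, are choices made for this example, not taken from the original code:

/* Sketch only: read the MSS value, if any, out of the option area of a
 * SYN segment that has already been bounds-checked by the caller.
 */
static int example_syn_mss(const char *opts, size_t optlen)
{
        int mss = tcp_opt_get(opts, optlen, 2 /* MSS option kind */,
                              NULL, NULL);

        if (mss < 0)
                return -1;      /* Option not present */

        return mss;             /* Two-byte value, returned in host order */
}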

/**
 * tcp_flow_defer() - Deferred per-flow handling (clean up closed connections)
 * @conn:	Connection to handle
 *
 * Return: true if the connection is ready to free, false otherwise
 */
bool tcp_flow_defer(const struct tcp_tap_conn *conn)
{
        if (conn->events != CLOSED)
                return false;

        close(conn->sock);
        if (conn->timer != -1)
                close(conn->timer);

        return true;
}

/**
 * tcp_defer_handler() - Handler for TCP deferred tasks
 * @c:		Execution context
 */
/* cppcheck-suppress [constParameterPointer, unmatchedSuppression] */
void tcp_defer_handler(struct ctx *c)
{
        tcp_payload_flush(c);
}

/**
 * tcp_fill_header() - Fill the TCP header fields for a given TCP segment.
 *
 * @th:		Pointer to the TCP header structure
 * @conn:	Pointer to the TCP connection structure
 * @seq:	Sequence number
 */
static void tcp_fill_header(struct tcphdr *th,
                            const struct tcp_tap_conn *conn, uint32_t seq)
{
        const struct flowside *tapside = TAPFLOW(conn);

        th->source = htons(tapside->oport);
        th->dest = htons(tapside->eport);
        th->seq = htonl(seq);
        th->ack_seq = htonl(conn->seq_ack_to_tap);
        if (conn->events & ESTABLISHED) {
                th->window = htons(conn->wnd_to_tap);
        } else {
                unsigned wnd = conn->wnd_to_tap << conn->ws_to_tap;

                th->window = htons(MIN(wnd, USHRT_MAX));
        }
}
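A worked example, with made-up numbers, of why the clamp in the non-established branch is needed:

/* Example, illustrative values only: until the connection is ESTABLISHED
 * the window scaling option is not yet in effect on the guest side, so a
 * scaled window has to be folded back into the bare 16-bit header field:
 *
 *	wnd_to_tap = 1024, ws_to_tap = 7
 *	wnd        = 1024 << 7 = 131072
 *	th->window = htons(MIN(131072, USHRT_MAX)) = htons(65535)
 */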

/**
 * tcp_fill_headers4() - Fill 802.3, IPv4, TCP headers in pre-cooked buffers
 * @conn:	Connection pointer
 * @taph:	tap backend specific header
 * @iph:	Pointer to IPv4 header
 * @bp:		Pointer to TCP header followed by TCP payload
 * @dlen:	TCP payload length
 * @check:	Checksum, if already known
 * @seq:	Sequence number for this segment
 * @no_tcp_csum: Do not set TCP checksum
 *
 * Return: The IPv4 payload length, host order
 */
size_t tcp_fill_headers4(const struct tcp_tap_conn *conn,
                         struct tap_hdr *taph,
                         struct iphdr *iph, struct tcp_payload_t *bp,
                         size_t dlen, const uint16_t *check,
                         uint32_t seq, bool no_tcp_csum)
{
        const struct flowside *tapside = TAPFLOW(conn);
        const struct in_addr *src4 = inany_v4(&tapside->oaddr);
        const struct in_addr *dst4 = inany_v4(&tapside->eaddr);
        size_t l4len = dlen + sizeof(bp->th);
        size_t l3len = l4len + sizeof(*iph);

        ASSERT(src4 && dst4);

        iph->tot_len = htons(l3len);
        iph->saddr = src4->s_addr;
        iph->daddr = dst4->s_addr;

        iph->check = check ? *check :
                     csum_ip4_header(l3len, IPPROTO_TCP, *src4, *dst4);

        tcp_fill_header(&bp->th, conn, seq);

        if (no_tcp_csum) {
                bp->th.check = 0;
        } else {
                const struct iovec iov = {
                        .iov_base = bp,
                        .iov_len = ntohs(iph->tot_len) - sizeof(struct iphdr),
                };

                tcp_update_check_tcp4(iph, &iov, 1, 0);
        }

        tap_hdr_update(taph, l3len + sizeof(struct ethhdr));

        return l4len;
}

/**
 * tcp_fill_headers6() - Fill 802.3, IPv6, TCP headers in pre-cooked buffers
 * @conn:	Connection pointer
 * @taph:	tap backend specific header
 * @ip6h:	Pointer to IPv6 header
 * @bp:		Pointer to TCP header followed by TCP payload
 * @dlen:	TCP payload length
 * @seq:	Sequence number for this segment
 * @no_tcp_csum: Do not set TCP checksum
 *
 * Return: The IPv6 payload length, host order
 */
size_t tcp_fill_headers6(const struct tcp_tap_conn *conn,
                         struct tap_hdr *taph,
                         struct ipv6hdr *ip6h, struct tcp_payload_t *bp,
                         size_t dlen, uint32_t seq, bool no_tcp_csum)
{
        const struct flowside *tapside = TAPFLOW(conn);
        size_t l4len = dlen + sizeof(bp->th);

        ip6h->payload_len = htons(l4len);
        ip6h->saddr = tapside->oaddr.a6;
        ip6h->daddr = tapside->eaddr.a6;

        ip6h->hop_limit = 255;
        ip6h->version = 6;
        ip6h->nexthdr = IPPROTO_TCP;

        ip6h->flow_lbl[0] = (conn->sock >> 16) & 0xf;
        ip6h->flow_lbl[1] = (conn->sock >> 8) & 0xff;
        ip6h->flow_lbl[2] = (conn->sock >> 0) & 0xff;

        tcp_fill_header(&bp->th, conn, seq);

        if (no_tcp_csum) {
                bp->th.check = 0;
        } else {
                const struct iovec iov = {
                        .iov_base = bp,
                        .iov_len = ntohs(ip6h->payload_len)
                };

                tcp_update_check_tcp6(ip6h, &iov, 1, 0);
        }

        tap_hdr_update(taph, l4len + sizeof(*ip6h) + sizeof(struct ethhdr));

        return l4len;
}

/**
 * tcp_l2_buf_fill_headers() - Fill 802.3, IP, TCP headers in pre-cooked buffers
 * @conn: Connection pointer
 * @iov: Pointer to an array of iovec of TCP pre-cooked buffers
 * @dlen: TCP payload length
 * @check: Checksum, if already known
 * @seq: Sequence number for this segment
 * @no_tcp_csum: Do not set TCP checksum
 *
 * Return: IP payload length, host order
 */
size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn,
			       struct iovec *iov, size_t dlen,
			       const uint16_t *check, uint32_t seq,
			       bool no_tcp_csum)
{
	const struct flowside *tapside = TAPFLOW(conn);
	const struct in_addr *a4 = inany_v4(&tapside->oaddr);

	if (a4) {
		return tcp_fill_headers4(conn, iov[TCP_IOV_TAP].iov_base,
					 iov[TCP_IOV_IP].iov_base,
					 iov[TCP_IOV_PAYLOAD].iov_base, dlen,
					 check, seq, no_tcp_csum);
	}

	return tcp_fill_headers6(conn, iov[TCP_IOV_TAP].iov_base,
				 iov[TCP_IOV_IP].iov_base,
				 iov[TCP_IOV_PAYLOAD].iov_base, dlen,
				 seq, no_tcp_csum);
}
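
tcp_l2_buf_fill_headers() picks the IPv4 or IPv6 path with inany_v4(): flow
addresses are stored in a form that can hold either family, and an IPv4
endpoint shows up as an IPv4-mapped IPv6 address. A minimal sketch of that
trick (types and helper below are illustrative, not the actual inany
implementation) looks like this:

#include <netinet/in.h>
#include <stdint.h>
#include <string.h>

/* One storage type for both address families, kept as an IPv6 address */
union inany_sketch {
	struct in6_addr a6;
};

/* Return a pointer to the embedded IPv4 address if @a is ::ffff:a.b.c.d,
 * NULL if it's a native IPv6 address
 */
static const struct in_addr *inany_v4_sketch(const union inany_sketch *a)
{
	static const uint8_t v4mapped[12] = {
		0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0xff,
	};

	if (memcmp(&a->a6, v4mapped, sizeof(v4mapped)))
		return NULL;

	return (const struct in_addr *)((const uint8_t *)&a->a6 + 12);
}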

tcp: Remove compile-time dependency on struct tcp_info version
In the Makefile we probe to create several defines based on the presence
of particular fields in struct tcp_info. These defines are used for two
purposes, neither of which they accomplish well:
1) Determining if the tcp_info fields are available at runtime. For this
purpose the defines are Just Plain Wrong, since the runtime kernel may
not be the same as the compile time kernel. We corrected this for
tcp_snd_wnd, but not for tcpi_bytes_acked or tcpi_min_rtt
2) Allowing the source to compile against older kernel headers which don't
have the fields in question. This works in theory, but it does mean
we won't be able to use the fields, even if later run against a
newer kernel. Furthermore, it's quite fragile: without much more
thorough tests of builds in different environments that we're currently
set up for, it's very easy to miss cases where we're accessing a field
without protection from an #ifdef. For example we currently access
tcpi_snd_wnd without #ifdefs in tcp_update_seqack_wnd().
Improve this with a different approach, borrowed from qemu (which has many
instances of similar problems). Don't compile against linux/tcp.h, using
netinet/tcp.h instead. Then for when we need an extension field, define
a struct tcp_info_linux, copied from the kernel, with all the fields we're
interested in. That may need updating from future kernel versions, but
only when we want to use a new extension, so it shouldn't be frequent.
This allows us to remove the HAS_SND_WND define entirely. We keep
HAS_BYTES_ACKED and HAS_MIN_RTT now, since they're used for purpose (1),
we'll fix that in a later patch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Trivial grammar fixes in comments]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2024-10-24 04:59:20 +00:00

treewide: Packet abstraction with mandatory boundary checks
Implement a packet abstraction providing boundary and size checks
based on packet descriptors: packets stored in a buffer can be queued
into a pool (without storage of its own), and data can be retrieved
referring to an index in the pool, specifying offset and length.
Checks ensure data is not read outside the boundaries of buffer and
descriptors, and that packets added to a pool are within the buffer
range with valid offset and indices.
This implies a wider rework: usage of the "queueing" part of the
abstraction mostly affects tap_handler_{passt,pasta}() functions and
their callees, while the "fetching" part affects all the guest or tap
facing implementations: TCP, UDP, ICMP, ARP, NDP, DHCP and DHCPv6
handlers.
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2022-03-25 12:02:47 +00:00

/**
 * tcp_update_seqack_wnd() - Update ACK sequence and window to guest/tap
 * @c: Execution context
 * @conn: Connection pointer
 * @force_seq: Force ACK sequence to latest segment, instead of checking socket
 * @tinfo: tcp_info from kernel, can be NULL if not pre-fetched
 *
 * Return: 1 if sequence or window were updated, 0 otherwise
 */
int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
			  bool force_seq, struct tcp_info_linux *tinfo)
{
	uint32_t prev_wnd_to_tap = conn->wnd_to_tap << conn->ws_to_tap;
	uint32_t prev_ack_to_tap = conn->seq_ack_to_tap;
	/* cppcheck-suppress [ctunullpointer, unmatchedSuppression] */
	socklen_t sl = sizeof(*tinfo);
	struct tcp_info_linux tinfo_new;
	uint32_t new_wnd_to_tap = prev_wnd_to_tap;
	int s = conn->sock;

	if (!bytes_acked_cap) {
		conn->seq_ack_to_tap = conn->seq_from_tap;
		if (SEQ_LT(conn->seq_ack_to_tap, prev_ack_to_tap))
			conn->seq_ack_to_tap = prev_ack_to_tap;
	} else {
		if ((unsigned)SNDBUF_GET(conn) < SNDBUF_SMALL ||
		    tcp_rtt_dst_low(conn) || CONN_IS_CLOSING(conn) ||
		    (conn->flags & LOCAL) || force_seq) {
			conn->seq_ack_to_tap = conn->seq_from_tap;
		} else if (conn->seq_ack_to_tap != conn->seq_from_tap) {
			if (!tinfo) {
				tinfo = &tinfo_new;
				if (getsockopt(s, SOL_TCP, TCP_INFO, tinfo, &sl))
					return 0;
			}

			conn->seq_ack_to_tap = tinfo->tcpi_bytes_acked +
					       conn->seq_init_from_tap;

			if (SEQ_LT(conn->seq_ack_to_tap, prev_ack_to_tap))
				conn->seq_ack_to_tap = prev_ack_to_tap;
		}
	}

	if (!snd_wnd_cap) {
		tcp_get_sndbuf(conn);
		new_wnd_to_tap = MIN(SNDBUF_GET(conn), MAX_WINDOW);
		conn->wnd_to_tap = MIN(new_wnd_to_tap >> conn->ws_to_tap,
				       USHRT_MAX);
		goto out;
	}

	if (!tinfo) {
		if (prev_wnd_to_tap > WINDOW_DEFAULT) {
			goto out;
		}
		tinfo = &tinfo_new;
		if (getsockopt(s, SOL_TCP, TCP_INFO, tinfo, &sl)) {
			goto out;
		}
	}

	if ((conn->flags & LOCAL) || tcp_rtt_dst_low(conn)) {
		new_wnd_to_tap = tinfo->tcpi_snd_wnd;
	} else {
		tcp_get_sndbuf(conn);
		new_wnd_to_tap = MIN((int)tinfo->tcpi_snd_wnd,
				     SNDBUF_GET(conn));
	}

	new_wnd_to_tap = MIN(new_wnd_to_tap, MAX_WINDOW);
	if (!(conn->events & ESTABLISHED))
		new_wnd_to_tap = MAX(new_wnd_to_tap, WINDOW_DEFAULT);

	conn->wnd_to_tap = MIN(new_wnd_to_tap >> conn->ws_to_tap, USHRT_MAX);

	/* Certain cppcheck versions, e.g. 2.12.0 have a bug where they think
	 * the MIN() above restricts conn->wnd_to_tap to be zero. That's
	 * clearly incorrect, but until the bug is fixed, work around it.
	 * https://bugzilla.redhat.com/show_bug.cgi?id=2240705
	 * https://sourceforge.net/p/cppcheck/discussion/general/thread/f5b1a00646/
	 */
	/* cppcheck-suppress [knownConditionTrueFalse, unmatchedSuppression] */
	if (!conn->wnd_to_tap)
		conn_flag(c, conn, ACK_TO_TAP_DUE);

out:
	return new_wnd_to_tap != prev_wnd_to_tap ||
	       conn->seq_ack_to_tap != prev_ack_to_tap;
}
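
The bytes_acked_cap and snd_wnd_cap flags tested above, together with the
struct tcp_info_linux approach described in the commit message quoted before
this function, move the "does tcp_info have this field?" question from build
time to run time. One way to probe that at run time, relying on the kernel
truncating TCP_INFO to the size it actually knows and reporting that size
back, is sketched below (a hypothetical helper, not necessarily how passt's
own probe is written):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/socket.h>
#include <unistd.h>

/* Return true if TCP_INFO on the running kernel covers at least @need bytes,
 * e.g. need = offsetof(struct tcp_info_linux, tcpi_bytes_acked) +
 * sizeof(uint64_t) to check for tcpi_bytes_acked
 */
static bool tcp_info_has(size_t need)
{
	char buf[2048];		/* comfortably larger than struct tcp_info */
	socklen_t sl = sizeof(buf);
	int s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
	bool ret = false;

	if (s < 0)
		return false;

	/* The kernel copies at most the tcp_info size it was built with, and
	 * reports that size back in sl
	 */
	if (!getsockopt(s, SOL_TCP, TCP_INFO, buf, &sl))
		ret = sl >= need;

	close(s);
	return ret;
}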

tcp: Reset ACK_FROM_TAP_DUE flag only as needed, update timer
David reports that TCP transfers might stall, especially with smaller
socket buffer sizes, because we reset the ACK_FROM_TAP_DUE flag, in
tcp_tap_handler(), whenever we receive an ACK segment, regardless of
its sequence number and the fact that we might still be waiting for
one. This way, we might fail to re-transmit frames on ACK timeouts.
We need, instead, to:
- indicate with the @retrans field only re-transmissions for the same
data sequences. If we make progress, it should be reset, given that
it's used to abort a connection when we exceed a given number of
re-transmissions for the same data
- unset the ACK_FROM_TAP_DUE flag if and only if the acknowledged
sequence is the same as the last one we sent, as suggested by David
- keep it set otherwise, if progress was done but not all the data we
sent was acknowledged, and update the expiration of the ACK timeout
Add a new helper for these purposes, tcp_update_seqack_from_tap().
To extend the ACK timeout, the new helper sets the ACK_FROM_TAP_DUE
flag, even if it was already set, and conn_flag_do() triggers a timer
update. This part should be revisited at a later time, because,
strictly speaking, ACK_FROM_TAP_DUE isn't a flag anymore. One
possibility might be to introduce another connection attribute for
events affecting timer deadlines.
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Link: https://bugs.passt.top/show_bug.cgi?id=41
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Fixes: be5bbb9b0681 ("tcp: Rework timers to use timerfd instead of periodic bitmap scan")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-02-12 21:26:55 +00:00

tcp: Clear ACK_FROM_TAP_DUE also on unchanged ACK sequence from peer
Since commit cc6d8286d104 ("tcp: Reset ACK_FROM_TAP_DUE flag only as
needed, update timer"), we don't clear ACK_FROM_TAP_DUE whenever we
process an ACK segment, but, more correctly, only if we're really not
waiting for a further ACK segment, that is, only if the acknowledged
sequence matches what we sent.
In the new function implementing this, tcp_update_seqack_from_tap(),
we also reset the retransmission counter and store the updated ACK
sequence. Both should be done iff forward progress is acknowledged,
implied by the fact that the new ACK sequence is greater than the
one we previously stored.
At that point, it looked natural to also include the statements that
clear and set the ACK_FROM_TAP_DUE flag inside the same conditional
block: if we're not making forward progress, the need for an ACK, or
lack thereof, should remain unchanged.
There might be cases where this isn't true, though: without the
previous commit 4e73e9bd655c ("tcp: Don't special case the handling
of the ack of a syn"), this would happen if a tap-side client
initiated a connection, and the server didn't send any data.
At that point we would never, in the established state of the
connection, call tcp_update_seqack_from_tap() with reported forward
progress.
That issue itself is fixed by the previous commit, now, but clearing
ACK_FROM_TAP_DUE only on ACK sequence progress doesn't really follow
any logic.
Clear the ACK_FROM_TAP_DUE flag regardless of reported forward
progress. If we clear it when it's already unset, conn_flag() will do
nothing with it.
This doesn't fix any known functional issue, rather a conceptual one.
Fixes: cc6d8286d104 ("tcp: Reset ACK_FROM_TAP_DUE flag only as needed, update timer")
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Analysed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-03-23 15:07:57 +00:00

/**
 * tcp_update_seqack_from_tap() - ACK number from tap and related flags/counters
 * @c: Execution context
 * @conn: Connection pointer
 * @seq: Current ACK sequence, host order
 */
static void tcp_update_seqack_from_tap(const struct ctx *c,
				       struct tcp_tap_conn *conn, uint32_t seq)
{
	if (seq == conn->seq_to_tap)
		conn_flag(c, conn, ~ACK_FROM_TAP_DUE);

	if (SEQ_GT(seq, conn->seq_ack_from_tap)) {
		/* Forward progress, but more data to acknowledge: reschedule */
		if (SEQ_LT(seq, conn->seq_to_tap))
			conn_flag(c, conn, ACK_FROM_TAP_DUE);

		conn->retrans = 0;
		conn->seq_ack_from_tap = seq;
	}
}
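
SEQ_GT() and SEQ_LT() above compare 32-bit sequence numbers modulo 2^32, so
the comparisons keep working across sequence number wrap-around. Illustrative
definitions in that style (hypothetical names, not copied from passt's
headers):

#include <assert.h>
#include <stdint.h>

/* a is "after" b if (a - b), taken modulo 2^32, maps to a positive value
 * when reinterpreted as a signed 32-bit integer
 */
#define SEQ_DIFF_SKETCH(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)))
#define SEQ_GT_SKETCH(a, b)	(SEQ_DIFF_SKETCH((a), (b)) > 0)
#define SEQ_LT_SKETCH(a, b)	(SEQ_DIFF_SKETCH((a), (b)) < 0)

int main(void)
{
	/* 10 is "after" 0xfffffff0: the sequence space wrapped in between */
	assert(SEQ_GT_SKETCH(10, 0xfffffff0u));
	assert(SEQ_LT_SKETCH(0xfffffff0u, 10));
	return 0;
}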

tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connection to timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for originating socket in spliced connections,
there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-19 00:29:05 +00:00

tcp: Fixes for closing states, spliced connections, out-of-order packets, etc.
This fixes a number of issues found with some heavier testing with
uperf and neper:
- in most closing states, we can still accept data, check for EPOLLIN
when appropriate
- introduce a new state, ESTABLISHED_SOCK_FIN_SENT, to track the fact
we already sent a FIN segment to the tap device, for proper sequence
number bookkeeping
- for pasta mode only: spliced connections also need tracking of
(inferred) FIN segments and clean half-pipe shutdowns
- streamline resetting epoll_wait bitmaps with a new function,
tcp_tap_epoll_mask(), instead of repeating the logic all over the
place
- set EPOLLET for tap connections too, whenever we are waiting for
EPOLLRDHUP or an event from the tap to proceed with data transfer,
to avoid useless loops with EPOLLIN set
- impose an additional limit on the sending window advertised to the
guest, given by SO_SNDBUF: it makes no sense to completely fill
the sending buffer and send a zero window: stop a bit before we
hit that
- handle *all* interrupted system calls as needed
- simplify the logic for reordering of out-of-order segments received
from tap: it's not a corner case, and the previous logic allowed
for deadloops
- fix comparison of seen IPv4 address when we get a new connection
from a socket directed to the configured guest address
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-09 13:16:46 +00:00

/**
 * tcp_prepare_flags() - Prepare header for flags-only segment (no payload)
 * @c: Execution context
 * @conn: Connection pointer
 * @flags: TCP flags: if not set, send segment only if ACK is due
 * @th: TCP header to update
 * @opts: buffer to store TCP options (output parameter)
 * @optlen: size of the TCP option buffer (output parameter)
 *
 * Return: < 0 error code on connection reset,
 *           0 if there is no flag to send
 *           1 otherwise
 */
int tcp_prepare_flags(const struct ctx *c, struct tcp_tap_conn *conn,
		      int flags, struct tcphdr *th, struct tcp_syn_opts *opts,
		      size_t *optlen)
{
	struct tcp_info_linux tinfo = { 0 };
	socklen_t sl = sizeof(tinfo);
	int s = conn->sock;

	if (SEQ_GE(conn->seq_ack_to_tap, conn->seq_from_tap) &&
	    !flags && conn->wnd_to_tap)
		return 0;

	if (getsockopt(s, SOL_TCP, TCP_INFO, &tinfo, &sl)) {
		conn_event(c, conn, CLOSED);
		return -ECONNRESET;
	}

	if (!(conn->flags & LOCAL))
		tcp_rtt_dst_check(conn, &tinfo);

	if (!tcp_update_seqack_wnd(c, conn, !!flags, &tinfo) && !flags)
		return 0;
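
	/* Note that !!flags above forces the ACK sequence to the latest data
	 * received from the tap whenever specific flags were requested: if
	 * neither the ACK sequence nor the window changed, and no flags were
	 * requested either, there is nothing worth sending.
	 */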

	*optlen = 0;
	if (flags & SYN) {
		int mss;

		if (c->mtu == -1) {
			mss = tinfo.tcpi_snd_mss;
		} else {
			mss = c->mtu - sizeof(struct tcphdr);
			if (CONN_V4(conn))
				mss -= sizeof(struct iphdr);
			else
				mss -= sizeof(struct ipv6hdr);

			if (c->low_wmem &&
			    !(conn->flags & LOCAL) && !tcp_rtt_dst_low(conn))
				mss = MIN(mss, PAGE_SIZE);
			else if (mss > PAGE_SIZE)
				mss = ROUND_DOWN(mss, PAGE_SIZE);
		}
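
		/* For example, assuming a 4 KiB page size: with a 1500-byte
		 * MTU on IPv4, mss = 1500 - 20 - 20 = 1460 and is used as is;
		 * with a 9000-byte MTU, mss = 8960 and is rounded down to a
		 * page multiple, 8192, unless the low_wmem case above already
		 * clamped it to a single page.
		 */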

		conn->ws_to_tap = MIN(MAX_WS, tinfo.tcpi_snd_wscale);

		*opts = TCP_SYN_OPTS(mss, conn->ws_to_tap);
		*optlen = sizeof(*opts);
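
		/* At this point, *opts presumably carries the options we
		 * advertise on SYN segments (an MSS option built from mss and
		 * a window scaling option built from conn->ws_to_tap), and
		 * *optlen their total size, so that th->doff below accounts
		 * for them.
		 */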
	} else if (!(flags & RST)) {
		flags |= ACK;
	}
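
	/* doff counts 32-bit words: 5 for the bare 20-byte header, more when
	 * SYN options are appended (e.g. 7 for an 8-byte option block)
	 */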
th->doff = (sizeof(*th) + *optlen) / 4;
|
passt: New design and implementation with native Layer 4 sockets
This is a reimplementation, partially building on the earlier draft,
that uses L4 sockets (SOCK_DGRAM, SOCK_STREAM) instead of SOCK_RAW,
providing L4-L2 translation functionality without requiring any
security capability.
Conceptually, this follows the design presented at:
https://gitlab.com/abologna/kubevirt-and-kvm/-/blob/master/Networking.md
The most significant novelty here comes from TCP and UDP translation
layers. In particular, the TCP state and translation logic follows
the intent of being minimalistic, without reimplementing a full TCP
stack in either direction, and synchronising as much as possible the
TCP dynamic and flows between guest and host kernel.
Another important introduction concerns addressing, port translation
and forwarding. The Layer 4 implementations now attempt to bind on
all unbound ports, in order to forward connections in a transparent
way.
While at it:
- the qemu 'tap' back-end can't be used as-is by qrap anymore,
because of explicit checks now introduced in qemu to ensure that
the corresponding file descriptor is actually a tap device. For
this reason, qrap now operates on a 'socket' back-end type,
accounting for and building the additional header reporting
frame length
- provide a demo script that sets up namespaces, addresses and
routes, and starts the daemon. A virtual machine started in the
network namespace, wrapped by qrap, will now directly interface
with passt and communicate using Layer 4 sockets provided by the
host kernel.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-02-16 06:25:09 +00:00
|
|
|
|
2024-03-26 05:42:22 +00:00
|
|
|
th->ack = !!(flags & ACK);
|
passt: New design and implementation with native Layer 4 sockets
This is a reimplementation, partially building on the earlier draft,
that uses L4 sockets (SOCK_DGRAM, SOCK_STREAM) instead of SOCK_RAW,
providing L4-L2 translation functionality without requiring any
security capability.
Conceptually, this follows the design presented at:
https://gitlab.com/abologna/kubevirt-and-kvm/-/blob/master/Networking.md
The most significant novelty here comes from TCP and UDP translation
layers. In particular, the TCP state and translation logic follows
the intent of being minimalistic, without reimplementing a full TCP
stack in either direction, and synchronising as much as possible the
TCP dynamic and flows between guest and host kernel.
Another important introduction concerns addressing, port translation
and forwarding. The Layer 4 implementations now attempt to bind on
all unbound ports, in order to forward connections in a transparent
way.
While at it:
- the qemu 'tap' back-end can't be used as-is by qrap anymore,
because of explicit checks now introduced in qemu to ensure that
the corresponding file descriptor is actually a tap device. For
this reason, qrap now operates on a 'socket' back-end type,
accounting for and building the additional header reporting
frame length
- provide a demo script that sets up namespaces, addresses and
routes, and starts the daemon. A virtual machine started in the
network namespace, wrapped by qrap, will now directly interface
with passt and communicate using Layer 4 sockets provided by the
host kernel.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-02-16 06:25:09 +00:00
|
|
|
th->rst = !!(flags & RST);
|
|
|
|
th->syn = !!(flags & SYN);
|
|
|
|
th->fin = !!(flags & FIN);
|
|
|
|
|
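A quick aside on the data offset computation above: TCP's Data Offset field counts the header length in 32-bit words, which is why the sum of the base header and the options is divided by four. A standalone illustration of the same arithmetic (not part of passt; the 8-byte option length is just a plausible SYN layout):

#include <stdio.h>
#include <netinet/tcp.h>	/* struct tcphdr, 20 bytes on Linux */

int main(void)
{
	size_t optlen = 8;	/* e.g. MSS (4) + window scale (3) + NOP (1) */
	unsigned doff = (sizeof(struct tcphdr) + optlen) / 4;

	printf("doff = %u, header length %u bytes\n", doff, doff * 4); /* 7, 28 */
	return 0;
}
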
tcp: Don't reset ACK_TO_TAP_DUE on any ACK, reschedule timer as needed
This is mostly symmetric with commit cc6d8286d104 ("tcp: Reset
ACK_FROM_TAP_DUE flag only as needed, update timer"): we shouldn't
reset the ACK_TO_TAP_DUE flag on any inbound ACK segment, but only
once we acknowledge everything we received from the guest or the
container.
If we don't, a client might unnecessarily hold off further data,
especially during slow start, and in general we won't converge to the
usable bandwidth.
This is very visible especially with traffic tests on links with
non-negligible latency, such as in the reported issue. There, a
public iperf3 server sometimes aborts the test due to what appears to
be a low iperf3 --rcv-timeout setting (probably less than a second). Even
if this doesn't happen, the throughput will converge to a fraction of
the usable bandwidth.
Clear ACK_TO_TAP_DUE if we acknowledged everything, set it if we
didn't, and reschedule the timer in case the flag is still set as the
timer expires.
While at it, decrease the ACK timer interval to 10ms.
A 50ms interval is short enough for any bandwidth-delay product I had
in mind (local connections, or non-local connections with limited
bandwidth), but here I am, testing 1gbps transfers to a peer with
100ms RTT.
Indeed, we could eventually make the timer interval dependent on the
current window and estimated bandwidth-delay product, but at least
for the moment being, 10ms should be long enough to avoid any
measurable syscall overhead, yet usable for any real-world
application.
Reported-by: Lukas Mrtvy <lukas.mrtvy@gmail.com>
Link: https://bugs.passt.top/show_bug.cgi?id=44
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-03-21 22:14:58 +00:00
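
The timer rescheduling mentioned above is not part of this excerpt. Purely as a hedged sketch, assuming a timerfd-backed per-connection timer and the 10ms interval from the log (the constant and function below are assumptions, not passt identifiers):

#include <sys/timerfd.h>

#define ACK_INTERVAL_MS	10	/* assumed value, per the log above */

/* Re-arm a one-shot ACK timer while ACK_TO_TAP_DUE is still set */
static void ack_timer_rearm(int timer_fd)
{
	struct itimerspec it = {
		.it_value.tv_nsec = ACK_INTERVAL_MS * 1000000L,
	};

	timerfd_settime(timer_fd, 0, &it, NULL);
}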

	if (th->ack) {
		if (SEQ_GE(conn->seq_ack_to_tap, conn->seq_from_tap))
			conn_flag(c, conn, ~ACK_TO_TAP_DUE);
		else
			conn_flag(c, conn, ACK_TO_TAP_DUE);
	}

	if (th->fin)
		conn_flag(c, conn, ACK_FROM_TAP_DUE);

tcp: Fixes for closing states, spliced connections, out-of-order packets, etc.
This fixes a number of issues found with some heavier testing with
uperf and neper:
- in most closing states, we can still accept data, check for EPOLLIN
when appropriate
- introduce a new state, ESTABLISHED_SOCK_FIN_SENT, to track the fact
we already sent a FIN segment to the tap device, for proper sequence
number bookkeeping
- for pasta mode only: spliced connections also need tracking of
(inferred) FIN segments and clean half-pipe shutdowns
- streamline resetting epoll_wait bitmaps with a new function,
tcp_tap_epoll_mask(), instead of repeating the logic all over the
place
- set EPOLLET for tap connections too, whenever we are waiting for
EPOLLRDHUP or an event from the tap to proceed with data transfer,
to avoid useless loops with EPOLLIN set
- impose an additional limit on the sending window advertised to the
guest, given by SO_SNDBUF: it makes no sense to completely fill
the sending buffer and send a zero window: stop a bit before we
hit that
- handle *all* interrupted system calls as needed
- simplify the logic for reordering of out-of-order segments received
from tap: it's not a corner case, and the previous logic allowed
for deadloops
- fix comparison of seen IPv4 address when we get a new connection
from a socket directed to the configured guest address
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-09 13:16:46 +00:00

	/* RFC 793, 3.1: "[...] and the first data octet is ISN+1." */
	if (th->fin || th->syn)
		conn->seq_to_tap++;

	return 1;
}
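
SEQ_GE() above (and SEQ_LT() further below) compare 32-bit sequence numbers that wrap around. A minimal sketch of the usual serial-number idiom such macros rely on; passt's actual definitions may differ in detail:

#include <stdint.h>

#define SEQ_GE(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) >= 0)
#define SEQ_LT(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) < 0)

/* Example: 0x00000002 is "after" 0xfffffffe across the wrap-around,
 * because the unsigned difference, 4, is positive as a signed value.
 */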

/**
 * tcp_send_flag() - Send segment with flags to tap (no payload)
 * @c:		Execution context
 * @conn:	Connection pointer
 * @flags:	TCP flags: if not set, send segment only if ACK is due
 *
 * Return: negative error code on connection reset, 0 otherwise
 */
static int tcp_send_flag(const struct ctx *c, struct tcp_tap_conn *conn,
			 int flags)
{
	return tcp_buf_send_flag(c, conn, flags);
}

treewide: Packet abstraction with mandatory boundary checks
Implement a packet abstraction providing boundary and size checks
based on packet descriptors: packets stored in a buffer can be queued
into a pool (without storage of its own), and data can be retrieved
referring to an index in the pool, specifying offset and length.
Checks ensure data is not read outside the boundaries of buffer and
descriptors, and that packets added to a pool are within the buffer
range with valid offset and indices.
This implies a wider rework: usage of the "queueing" part of the
abstraction mostly affects tap_handler_{passt,pasta}() functions and
their callees, while the "fetching" part affects all the guest or tap
facing implementations: TCP, UDP, ICMP, ARP, NDP, DHCP and DHCPv6
handlers.
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2022-03-25 12:02:47 +00:00

/**
 * tcp_rst_do() - Reset a tap connection: send RST segment to tap, close socket
 * @c:		Execution context
 * @conn:	Connection pointer
 */
void tcp_rst_do(const struct ctx *c, struct tcp_tap_conn *conn)
{
	if (conn->events == CLOSED)
		return;

	if (!tcp_send_flag(c, conn, RST))
		conn_event(c, conn, CLOSED);
}

tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connection to timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for originating socket in spliced connections,
there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-19 00:29:05 +00:00

/**
 * tcp_get_tap_ws() - Get Window Scaling option for connection from tap/guest
 * @conn:	Connection pointer
 * @opts:	Pointer to start of TCP options
 * @optlen:	Bytes in options: caller MUST ensure available length
 */
static void tcp_get_tap_ws(struct tcp_tap_conn *conn,
			   const char *opts, size_t optlen)
{
	int ws = tcp_opt_get(opts, optlen, OPT_WS, NULL, NULL);

	if (ws >= 0 && ws <= TCP_WS_MAX)
		conn->ws_from_tap = ws;
	else
		conn->ws_from_tap = 0;
}
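
tcp_opt_get() itself is outside this excerpt; the two trailing NULL arguments in the call above suggest optional out-parameters (presumably offset and length). Below is a self-contained sketch of the kind/length/value scan such a helper performs, using the standard option kind values; this is an illustration, not passt's implementation:

#include <stddef.h>

#define OPT_EOL	0
#define OPT_NOP	1
#define OPT_WS	3	/* Window Scale, RFC 7323 */

/* Return the first payload byte of option @kind, -1 if absent or malformed */
static int tcp_opt_find(const char *opts, size_t optlen, unsigned kind)
{
	size_t i = 0;

	while (i < optlen) {
		unsigned type = (unsigned char)opts[i];
		size_t len;

		if (type == OPT_EOL)
			break;
		if (type == OPT_NOP) {
			i++;
			continue;
		}
		if (i + 1 >= optlen)
			break;			/* truncated length field */

		len = (unsigned char)opts[i + 1];
		if (len < 2 || i + len > optlen)
			break;			/* malformed option */

		if (type == kind)
			return len > 2 ? (unsigned char)opts[i + 2] : 0;

		i += len;
	}

	return -1;
}
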
passt: Assorted fixes from "fresh eyes" review
A bunch of fixes not worth single commits at this stage, notably:
- make buffer, length parameter ordering consistent in ARP, DHCP,
NDP handlers
- strict checking of buffer, message and option length in DHCP
handler (a malicious client could have easily crashed it)
- set up forwarding for IPv4 and IPv6, and masquerading with nft for
IPv4, from demo script
- get rid of separate slow and fast timers, we don't save any
overhead that way
- stricter checking of buffer lengths as passed to tap handlers
- proper dequeuing from qemu socket back-end: I accidentally trashed
messages that were bundled up together in a single tap read
operation -- the length header tells us what's the size of the next
frame, but there's no apparent limit to the number of messages we
get with one single receive
- rework some bits of the TCP state machine, now passive and active
connection closes appear to be robust -- introduce a new
FIN_WAIT_1_SOCK_FIN state indicating a FIN_WAIT_1 with a FIN flag
from socket
- streamline TCP option parsing routine
- track TCP state changes to stderr (this is temporary, proper
debugging and syslogging support pending)
- observe that multiplying a number by four might very well change
its value, and this happens to be the case for the data offset
from the TCP header as we check if it's the same as the total
length to find out if it's a duplicated ACK segment
- recent estimates suggest that the duration of a millisecond is
closer to a million nanoseconds than a thousand of them, this
trend is now reflected into the timespec_diff_ms() convenience
routine
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-02-21 10:33:38 +00:00
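
The last point above concerns a helper that is not part of this excerpt. A minimal sketch of a millisecond difference between two struct timespec values once the nanosecond divisor is right; the name follows the log, the exact signature is an assumption:

#include <stdint.h>
#include <time.h>

static int64_t timespec_diff_ms(const struct timespec *a,
				const struct timespec *b)
{
	return (int64_t)(a->tv_sec - b->tv_sec) * 1000 +
	       (a->tv_nsec - b->tv_nsec) / 1000000;
}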

tcp: Don't use TCP_WINDOW_CLAMP
On the L2 tap side, we see TCP headers and know the TCP window that the
ultimate receiver is advertising. In order to avoid unnecessary buffering
within passt/pasta (or by the kernel on passt/pasta's behalf) we attempt
to advertise that window back to the original sock-side sender using
TCP_WINDOW_CLAMP.
However, TCP_WINDOW_CLAMP just doesn't work like this. Prior to kernel
commit 3aa7857fe1d7 ("tcp: enable mid stream window clamp"), it simply
had no effect on established sockets. After that commit, it does affect
established sockets but doesn't behave the way we need:
* It appears to be designed only to shrink the window, not to allow it to
re-expand.
* More importantly, that commit has a serious bug where if the
setsockopt() is made when the existing kernel advertised window for the
socket happens to be zero, it will now become locked at zero, stopping
any further data from being received on the socket.
Since this has never worked as intended, simply remove it. It might be
possible to re-implement the intended behaviour by manipulating SO_RCVBUF,
so we leave a comment to that effect.
This kernel bug is the underlying cause of both the linked passt bug and
the linked podman bug. We attempted to fix this before with passt commit
d3192f67 ("tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag").
However while that commit masked the bug for some cases, it didn't really
address the problem.
Fixes: d3192f67c492 ("tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag")
Link: https://github.com/containers/podman/issues/20170
Link: https://bugs.passt.top/show_bug.cgi?id=74
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-11-09 09:54:00 +00:00

/**
 * tcp_tap_window_update() - Process an updated window from tap side
 * @conn:	Connection pointer
 * @wnd:	Window value, host order, unscaled
 */
static void tcp_tap_window_update(struct tcp_tap_conn *conn, unsigned wnd)
{
	wnd = MIN(MAX_WINDOW, wnd << conn->ws_from_tap);

tcp: handle shrunk window advertisements from guest
A bug in kernel TCP may lead to a deadlock where a zero window is sent
from the guest peer, while it is unable to send out window updates even
after socket reads have freed up enough buffer space to permit a larger
window. In this situation, new window advertisements from the peer can
only be triggered by data packets arriving from this side.
However, currently such packets are never sent, because the zero-window
condition prevents this side from sending out any packets whatsoever
to the peer.
We notice that the above bug is triggered *only* after the peer has
dropped one or more arriving packets because of severe memory squeeze,
and that we hence always enter a retransmission situation when this
occurs. This also means that the implementation goes against the
RFC-9293 recommendation that a previously advertised window never
should shrink.
RFC-9293 seems to permit that we can continue sending up to the right
edge of the last advertised non-zero window in such situations, so that
is what we do to resolve this situation.
It turns out that this solution is extremely simple to implement in the
code: We just omit to save the advertised zero-window when we see that
it has shrunk, i.e., if the acknowledged sequence number in the
advertisement message is lower than that of the last data byte sent
from our side.
When that is the case, the following happens:
- The 'retr' flag in tcp_data_from_tap() will be 'false', so no
retransmission will occur at this occasion.
- The data stream will soon reach the right edge of the previously
advertised window. In fact, in all observed cases we have seen that
it is already there when the zero-advertisement arrives.
- At that moment, the flags STALLED and ACK_FROM_TAP_DUE will be set,
unless they already have been, meaning that only the next timer
expiration will open for data retransmission or transmission.
- When that happens, the memory squeeze at the guest will normally have
abated, and the data flow can resume.
It should be noted that although this solves the problem we have at
hand, it is a work-around, and not a genuine solution to the described
kernel bug.
Suggested-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Minor fix in commit title and commit reference in comment
to workaround]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2024-07-12 19:04:50 +00:00

	/* Work-around for bug introduced in peer kernel code, commit
	 * e2142825c120 ("net: tcp: send zero-window ACK when no memory").
	 * We don't update if window shrank to zero.
	 */
	if (!wnd && SEQ_LT(conn->seq_ack_from_tap, conn->seq_to_tap))
		return;

	conn->wnd_from_tap = MIN(wnd >> conn->ws_from_tap, USHRT_MAX);

	/* FIXME: reflect the tap-side receiver's window back to the sock-side
	 * sender by adjusting SO_RCVBUF? */
}
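
The FIXME above suggests steering the kernel's receive buffer with SO_RCVBUF instead of TCP_WINDOW_CLAMP. That is not what passt currently does; the following is only a hedged sketch of the idea (note that the kernel doubles the value passed to SO_RCVBUF, see socket(7)):

#include <sys/socket.h>

/* Illustrative only: shrink the receive buffer towards the window
 * advertised by the tap-side receiver.
 */
static void rcvbuf_clamp(int s, unsigned int wnd_bytes)
{
	int v = (int)(wnd_bytes / 2);	/* kernel doubles this value */

	setsockopt(s, SOL_SOCKET, SO_RCVBUF, &v, sizeof(v));
}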

udp: Connection tracking for ephemeral, local ports, and related fixes
As we support UDP forwarding for packets that are sent to local
ports, we actually need some kind of connection tracking for UDP.
While at it, this commit introduces a number of vaguely related fixes
for issues observed while trying this out. In detail:
- implement an explicit, albeit minimalistic, connection tracking
for UDP, to allow usage of ephemeral ports by the guest and by
the host at the same time, by binding them dynamically as needed,
and to allow mapping address changes for packets with a loopback
address as destination
- set the guest MAC address whenever we receive a packet from tap
instead of waiting for an ARP request, and set it to broadcast on
start, otherwise DHCPv6 might not work if all DHCPv6 requests time
out before the guest starts talking IPv4
- split context IPv6 address into address we assign, global or site
address seen on tap, and link-local address seen on tap, and make
sure we use the addresses we've seen as destination (link-local
choice depends on source address). Similarly, for IPv4, split into
address we assign and address we observe, and use the address we
observe as destination
- introduce a clock_gettime() syscall right after epoll_wait() wakes
up, so that we can remove all the other ones and pass the current
timestamp to tap and socket handlers -- this is additionally needed
by UDP to time out bindings to ephemeral ports and mappings between
loopback address and a local address
- rename sock_l4_add() to sock_l4(), no semantic changes intended
- include <arpa/inet.h> in passt.c before kernel headers so that we
can use <netinet/in.h> macros to check IPv6 address types, and
remove a duplicate <linux/ip.h> inclusion
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-04-29 14:59:20 +00:00

/**
 * tcp_init_seq() - Calculate initial sequence number according to RFC 6528
 * @hash:	Hash of connection details
 * @now:	Current timestamp
 */
static uint32_t tcp_init_seq(uint64_t hash, const struct timespec *now)
{
	/* 32ns ticks, overflows 32 bits every 137s */
	uint32_t ns = (now->tv_sec * 1000000000 + now->tv_nsec) >> 5;

	return ((uint32_t)(hash >> 32) ^ (uint32_t)hash) + ns;
}
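
RFC 6528 makes the initial sequence number the sum of a timed counter and a keyed hash of the connection details; here the hash is computed elsewhere and passed in as @hash. A standalone usage sketch of the same scheme, with an illustrative clock choice and names that are not passt's:

#include <stdint.h>
#include <time.h>

static uint32_t demo_init_seq(uint64_t tuple_hash)
{
	struct timespec now;
	uint32_t ns;

	clock_gettime(CLOCK_MONOTONIC, &now);
	ns = (now.tv_sec * 1000000000 + now.tv_nsec) >> 5;	/* 32ns ticks */

	/* Fold the 64-bit keyed hash and add the clock-driven component */
	return ((uint32_t)(tuple_hash >> 32) ^ (uint32_t)tuple_hash) + ns;
}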
passt: New design and implementation with native Layer 4 sockets
This is a reimplementation, partially building on the earlier draft,
that uses L4 sockets (SOCK_DGRAM, SOCK_STREAM) instead of SOCK_RAW,
providing L4-L2 translation functionality without requiring any
security capability.
Conceptually, this follows the design presented at:
https://gitlab.com/abologna/kubevirt-and-kvm/-/blob/master/Networking.md
The most significant novelty here comes from TCP and UDP translation
layers. In particular, the TCP state and translation logic follows
the intent of being minimalistic, without reimplementing a full TCP
stack in either direction, and synchronising as much as possible the
TCP dynamic and flows between guest and host kernel.
Another important introduction concerns addressing, port translation
and forwarding. The Layer 4 implementations now attempt to bind on
all unbound ports, in order to forward connections in a transparent
way.
While at it:
- the qemu 'tap' back-end can't be used as-is by qrap anymore,
because of explicit checks now introduced in qemu to ensure that
the corresponding file descriptor is actually a tap device. For
this reason, qrap now operates on a 'socket' back-end type,
accounting for and building the additional header reporting
frame length
- provide a demo script that sets up namespaces, addresses and
routes, and starts the daemon. A virtual machine started in the
network namespace, wrapped by qrap, will now directly interface
with passt and communicate using Layer 4 sockets provided by the
host kernel.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-02-16 06:25:09 +00:00
|
|
|
tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
  to clamp window sizes to avoid exceeding the size of our "discard"
  buffer for unacknowledged data from socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
  mode, whenever we see an overlapping segment, or when we reach a
  given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
  latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
  kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connection to timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
  buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
  connections, to coalesce data moved by splice() calls as much as
  possible
- as we now have a compacted connection table, there's no need to
  keep sparse bitmaps tracking connection activity -- simply go
  through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
  too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for originating socket in spliced connections,
  there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-19 00:29:05 +00:00

/**
 * tcp_conn_pool_sock() - Get socket for new connection from pre-opened pool
 * @pool: Pool of pre-opened sockets
 *
 * Return: socket number if available, negative code if pool is empty
 */
int tcp_conn_pool_sock(int pool[])
{
        int s = -1, i;

        for (i = 0; i < TCP_SOCK_POOL_SIZE; i++) {
                SWAP(s, pool[i]);
                if (s >= 0)
                        return s;
        }

        return -1;
}
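
The pool is drained with a plain swap: each slot is exchanged with -1, the
first valid descriptor wins, and the consumed slot is left empty. As a
standalone illustration of the pattern (the names, pool size and the refill
helper below are hypothetical, not the ones used by passt):

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define POOL_SIZE 4
#define SWAP_INT(a, b) do { int t_ = (a); (a) = (b); (b) = t_; } while (0)

/* Hypothetical refill helper: top up empty slots with pre-opened sockets */
static void pool_refill(int pool[])
{
        int i;

        for (i = 0; i < POOL_SIZE; i++) {
                if (pool[i] >= 0)
                        continue;
                pool[i] = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);
        }
}

/* Same drain pattern as tcp_conn_pool_sock(): swap out the first valid fd */
static int pool_get(int pool[])
{
        int s = -1, i;

        for (i = 0; i < POOL_SIZE; i++) {
                SWAP_INT(s, pool[i]);
                if (s >= 0)
                        return s;
        }
        return -1;
}

int main(void)
{
        int pool[POOL_SIZE] = { -1, -1, -1, -1 };
        int s;

        pool_refill(pool);
        s = pool_get(pool);
        printf("got socket %d from pool\n", s);
        if (s >= 0)
                close(s);
        return 0;
}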
tap, tcp, util: Add some missing SOCK_CLOEXEC flags
I have no idea why, but these are reported by clang-tidy (19.2.1) on
Alpine (x86) only:
/home/sbrivio/passt/tap.c:1139:38: error: 'socket' should use SOCK_CLOEXEC where possible [android-cloexec-socket,-warnings-as-errors]
1139 | int fd = socket(AF_UNIX, SOCK_STREAM, 0);
| ^
| | SOCK_CLOEXEC
/home/sbrivio/passt/tap.c:1158:51: error: 'socket' should use SOCK_CLOEXEC where possible [android-cloexec-socket,-warnings-as-errors]
1158 | ex = socket(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0);
| ^
| | SOCK_CLOEXEC
/home/sbrivio/passt/tcp.c:1413:44: error: 'socket' should use SOCK_CLOEXEC where possible [android-cloexec-socket,-warnings-as-errors]
1413 | s = socket(af, SOCK_STREAM | SOCK_NONBLOCK, IPPROTO_TCP);
| ^
| | SOCK_CLOEXEC
/home/sbrivio/passt/util.c:188:38: error: 'socket' should use SOCK_CLOEXEC where possible [android-cloexec-socket,-warnings-as-errors]
188 | if ((s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) {
| ^
| | SOCK_CLOEXEC
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2024-11-07 18:40:37 +00:00

tcp: Probe net.core.{r,w}mem_max, don't set SO_{RCV,SND}BUF if low
If net.core.rmem_max and net.core.wmem_max sysctls have low values,
we can get bigger buffers by not trying to set them high -- the
kernel would lock their values to what we get.
Try, instead, to get bigger buffers by queueing as much as possible,
and if maximum values in tcp_wmem and tcp_rmem are bigger than this,
that will work.
While at it, drop QUICKACK option for non-spliced sockets, I set
that earlier by mistake.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-10-04 20:08:24 +00:00

/**
 * tcp_conn_new_sock() - Open and prepare new socket for connection
 * @c: Execution context
 * @af: Address family
 *
 * Return: socket number on success, negative code if socket creation failed
 */
static int tcp_conn_new_sock(const struct ctx *c, sa_family_t af)
{
        int s;

        s = socket(af, SOCK_STREAM | SOCK_NONBLOCK | SOCK_CLOEXEC, IPPROTO_TCP);

        if (s > FD_REF_MAX) {
                close(s);
                return -EIO;
        }

        if (s < 0)
                return -errno;

        tcp_sock_set_bufsize(c, s);

        return s;
}
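
tcp_sock_set_bufsize() itself is outside this excerpt; as a rough sketch of
the policy the commit message above describes (read the net.core.rmem_max cap
first, and skip SO_RCVBUF entirely when it is low, so that tcp_rmem autotuning
can do better), with a made-up threshold rather than passt's actual values:

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch only: read a sysctl value exposed under /proc/sys */
static long sysctl_read(const char *path)
{
        long val = -1;
        FILE *f;

        if (!(f = fopen(path, "r")))
                return -1;
        if (fscanf(f, "%ld", &val) != 1)
                val = -1;
        fclose(f);
        return val;
}

int main(void)
{
        /* 4 MiB is an arbitrary placeholder threshold, not passt's value */
        long rmem_max = sysctl_read("/proc/sys/net/core/rmem_max");
        int want = 16 * 1024 * 1024;
        int s;

        if ((s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0)) < 0)
                return 1;

        if (rmem_max >= 4 * 1024 * 1024 &&
            setsockopt(s, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want)))
                perror("SO_RCVBUF");

        close(s);
        return 0;
}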
/**
 * tcp_conn_sock() - Obtain a connectable socket in the host/init namespace
 * @c: Execution context
 * @af: Address family (AF_INET or AF_INET6)
 *
 * Return: Socket fd on success, -errno on failure
 */
int tcp_conn_sock(const struct ctx *c, sa_family_t af)
{
        int *pool = af == AF_INET6 ? init_sock_pool6 : init_sock_pool4;
        int s;

        if ((s = tcp_conn_pool_sock(pool)) >= 0)
                return s;

        /* If the pool is empty we just open a new one without refilling the
         * pool to keep latency down.
         */
        if ((s = tcp_conn_new_sock(c, af)) >= 0)
                return s;

        err("TCP: Unable to open socket for new connection: %s",
            strerror(-s));

        return -1;
}
treewide: Packet abstraction with mandatory boundary checks
Implement a packet abstraction providing boundary and size checks
based on packet descriptors: packets stored in a buffer can be queued
into a pool (without storage of its own), and data can be retrieved
referring to an index in the pool, specifying offset and length.
Checks ensure data is not read outside the boundaries of buffer and
descriptors, and that packets added to a pool are within the buffer
range with valid offset and indices.
This implies a wider rework: usage of the "queueing" part of the
abstraction mostly affects tap_handler_{passt,pasta}() functions and
their callees, while the "fetching" part affects all the guest or tap
facing implementations: TCP, UDP, ICMP, ARP, NDP, DHCP and DHCPv6
handlers.
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2022-03-25 12:02:47 +00:00

tcp: Clamp MSS value when queueing data to tap, also for pasta
Tom reports that a pattern of repeated ~1 MiB chunk downloads over
NNTP over TLS, on Podman 4.4 using pasta as network back-end, results
in pasta taking one full CPU thread after a while, and the download
never succeeds.
On that setup, we end up re-sending the same frame over and over,
with a consistent 65 534 bytes size, and never get an
acknowledgement from the tap-side client. This only happens for the
default MTU value (65 520 bytes) or for values that are slightly
smaller than that (down to 64 499 bytes).
We hit this condition because the MSS value we use in
tcp_data_from_sock(), only in pasta mode, is simply clamped to
USHRT_MAX, and not to the actual size of the buffers we pre-cooked
for sending, which is a bit less than that.
It looks like we got away with it until commit 0fb7b2b9080a ("tap:
Use different io vector bases depending on tap type") fixed the
setting of iov_len.
Luckily, since it's pasta, we're queueing up to two frames at a time,
so the worst that can happen is a badly segmented TCP stream: we
always have some space at the tail of the buffer.
Clamp the MSS value to the appropriate maximum given by struct
tcp{4,6}_buf_data_t, no matter if we're running in pasta or passt
mode.
While at it, fix the comments to those structs to reflect the current
struct size. This is not really relevant for any further calculation
or consideration, but it's convenient to know while debugging this
kind of issue.
Thanks to Tom for reporting the issue in a very detailed way and for
providing a test setup.
Reported-by: Tom Mombourquette <tom@devnode.com>
Link: https://github.com/containers/podman/issues/17703
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2023-03-08 17:07:42 +00:00

/**
 * tcp_conn_tap_mss() - Get MSS value advertised by tap/guest
 * @conn: Connection pointer
 * @opts: Pointer to start of TCP options
 * @optlen: Bytes in options: caller MUST ensure available length
 *
 * Return: clamped MSS value
 */
static uint16_t tcp_conn_tap_mss(const struct tcp_tap_conn *conn,
                                 const char *opts, size_t optlen)
{
        unsigned int mss;
        int ret;

        if ((ret = tcp_opt_get(opts, optlen, OPT_MSS, NULL, NULL)) < 0)
                mss = MSS_DEFAULT;
        else
                mss = ret;

        if (CONN_V4(conn))
                mss = MIN(MSS4, mss);
        else
                mss = MIN(MSS6, mss);

        return MIN(mss, USHRT_MAX);
}
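
tcp_opt_get() is defined elsewhere; what it is used for here is a
bounds-checked scan of the TCP option list for the MSS option (kind 2,
length 4). A simplified, standalone sketch of such a scan (not the actual
helper):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch: return the advertised MSS from raw TCP options, -1 if absent */
int tcp_opt_mss_sketch(const uint8_t *opts, size_t optlen)
{
        size_t i = 0;

        while (i < optlen) {
                uint8_t kind = opts[i], len;

                if (kind == 0)          /* End of Option List */
                        break;
                if (kind == 1) {        /* No-Operation */
                        i++;
                        continue;
                }
                if (i + 1 >= optlen)
                        break;
                len = opts[i + 1];
                if (len < 2 || i + len > optlen)
                        break;
                if (kind == 2 && len == 4) {    /* Maximum Segment Size */
                        uint16_t mss;

                        memcpy(&mss, &opts[i + 2], sizeof(mss));
                        return ntohs(mss);
                }
                i += len;
        }
        return -1;
}

int main(void)
{
        const uint8_t opts[] = { 2, 4, 0x05, 0xb4, 1, 0 };      /* MSS 1460 */

        printf("MSS: %d\n", tcp_opt_mss_sketch(opts, sizeof(opts)));
        return 0;
}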
conf, icmp, tcp, udp: Add options to bind to outbound address and interface
I didn't notice earlier: libslirp (and slirp4netns) supports binding
outbound sockets to specific IPv4 and IPv6 addresses, to force the
source address selection. If we want to claim feature parity, we
should implement that as well.
Further, Podman supports specifying outbound interfaces as well, but
this is simply done by resolving the primary address for an interface
when the network back-end is started. However, since kernel version
5.7, commit c427bfec18f2 ("net: core: enable SO_BINDTODEVICE for
non-root users"), we can actually bind to a specific interface name,
which doesn't need to be validated in advance.
Implement -o / --outbound ADDR to bind to IPv4 and IPv6 addresses,
and --outbound-if4 and --outbound-if6 to bind IPv4 and IPv6 sockets
to given interfaces.
Given that it probably makes little sense to select addresses and
routes from interfaces different from the ones given for outbound
sockets, also assign those as "template" interfaces, by default,
unless explicitly overridden by '-i'.
For ICMP and UDP, we call sock_l4() to open outbound sockets, as we
already needed to bind to given ports or echo identifiers, and we
can bind() a socket only once: there, pass address (if any) and
interface (if any) for the existing bind() and setsockopt() calls.
For TCP, in general, we wouldn't otherwise bind sockets. Add a
specific helper to do that.
For UDP outbound sockets, we need to know if the final destination
of the socket is a loopback address, before we decide whether it
makes sense to bind the socket at all: move the block mangling the
destination address before the creation of the socket in the IPv4
path. This was already the case for the IPv6 path.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2023-03-08 02:29:51 +00:00

/**
 * tcp_bind_outbound() - Bind socket to outbound address and interface if given
 * @c: Execution context
 * @conn: Connection entry for socket to bind
 * @s: Outbound TCP socket
 */
static void tcp_bind_outbound(const struct ctx *c,
                              const struct tcp_tap_conn *conn, int s)
{
        const struct flowside *tgt = &conn->f.side[TGTSIDE];
        union sockaddr_inany bind_sa;
        socklen_t sl;

        pif_sockaddr(c, &bind_sa, &sl, PIF_HOST, &tgt->oaddr, tgt->oport);
        if (!inany_is_unspecified(&tgt->oaddr) || tgt->oport) {
                if (bind(s, &bind_sa.sa, sl)) {
                        char sstr[INANY_ADDRSTRLEN];

                        flow_dbg(conn,
                                 "Can't bind TCP outbound socket to %s:%hu: %s",
                                 inany_ntop(&tgt->oaddr, sstr, sizeof(sstr)),
                                 tgt->oport, strerror(errno));
                }
        }

        if (bind_sa.sa_family == AF_INET) {
                if (*c->ip4.ifname_out) {
                        if (setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE,
                                       c->ip4.ifname_out,
                                       strlen(c->ip4.ifname_out))) {
                                flow_dbg(conn, "Can't bind IPv4 TCP socket to"
                                         " interface %s: %s", c->ip4.ifname_out,
                                         strerror(errno));
                        }
                }
        } else if (bind_sa.sa_family == AF_INET6) {
                if (*c->ip6.ifname_out) {
                        if (setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE,
                                       c->ip6.ifname_out,
                                       strlen(c->ip6.ifname_out))) {
                                flow_dbg(conn, "Can't bind IPv6 TCP socket to"
                                         " interface %s: %s", c->ip6.ifname_out,
                                         strerror(errno));
                        }
                }
        }
}
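
Stripped of passt's flow and context types, the two operations
tcp_bind_outbound() performs map to a bind() to the requested source address
and an SO_BINDTODEVICE setsockopt() with the interface name. A minimal sketch
with placeholder address and interface (per the commit message above,
SO_BINDTODEVICE works for non-root users since Linux 5.7):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct sockaddr_in src = { .sin_family = AF_INET };
        const char *ifname = "eth0";    /* placeholder outbound interface */
        int s;

        s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
        if (s < 0)
                return 1;

        /* Force the source address: placeholder 192.0.2.1 (TEST-NET-1) */
        inet_pton(AF_INET, "192.0.2.1", &src.sin_addr);
        if (bind(s, (struct sockaddr *)&src, sizeof(src)))
                perror("bind to source address");

        /* Force the outbound interface by name */
        if (setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE, ifname, strlen(ifname)))
                perror("SO_BINDTODEVICE");

        close(s);
        return 0;
}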
tcp: Validate TCP endpoint addresses
TCP connections should typically not have wildcard addresses (0.0.0.0
or ::) nor a zero port number for either endpoint. It's not entirely
clear (at least to me) if it's strictly against the RFCs to do so, but
at any rate the socket interfaces often treat those values
specially[1], so it's not really possible to manipulate such
connections. Likewise they should not have broadcast or multicast
addresses for either endpoint.
However, nothing prevents a guest from creating a SYN packet with such
values, and it's not entirely clear what the effect on passt would be.
To ensure sane behaviour, explicitly check for this case and drop such
packets, logging a debug warning (we don't want a higher level,
because that would allow a guest to spam the logs).
We never expect such an address on an accept()ed socket either, but
just in case, check for it as well.
[1] Depending on context as "unknown", "match any" or "kernel, pick
something for me"
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2024-02-28 11:25:17 +00:00
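
inany_is_unicast(), used for this validation below, isn't part of this
excerpt; the gist of the check the commit message describes, for the plain
IPv4 case only (a sketch, not the actual helper, which also covers IPv6):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch: reject wildcard, multicast and limited-broadcast IPv4 addresses */
bool ipv4_is_unicast_sketch(struct in_addr a)
{
        uint32_t addr = ntohl(a.s_addr);

        if (addr == INADDR_ANY)                 /* 0.0.0.0 */
                return false;
        if (IN_MULTICAST(addr))                 /* 224.0.0.0/4 */
                return false;
        if (addr == INADDR_BROADCAST)           /* 255.255.255.255 */
                return false;
        return true;
}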

tcp: Don't rely on bind() to fail to decide that connection target is valid
Commit e1a2e2780c91 ("tcp: Check if connection is local or low RTT
was seen before using large MSS") added a call to bind() before we
issue a connect() to the target for an outbound connection.
If bind() fails, but neither with EADDRNOTAVAIL, nor with EACCES, we
can conclude that the target address is a local (host) address, and we
can use an unlimited MSS.
While at it, according to the reasoning of that commit, if bind()
succeeds, we would know right away that nobody is listening at that
(local) address and port, and we don't even need to call connect(): we
can just fail early and reset the connection attempt.
But if non-local binds are enabled via the net.ipv4.ip_nonlocal_bind or
net.ipv6.ip_nonlocal_bind sysctls, binding to a non-local address will
actually succeed, so we can't rely on it to fail in general.
The visible issue with the existing behaviour is that we would reset
any outbound connection to non-local addresses, if non-local binds are
enabled.
Keep the significant optimisation for local addresses along with the
bind() call, but if it succeeds, don't draw any conclusion: close the
socket, grab another one, and proceed normally.
This will incur a small latency penalty if non-local binds are
enabled (we'll likely fetch an existing socket from the pool but
additionally call close()), or if the target is local but not bound:
we'll need to call connect() and get a failure before relaying that
failure back.
Link: https://github.com/containers/podman/issues/23003
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2024-06-18 10:32:17 +00:00
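
The probe this commit message describes boils down to a bind() against the
connection's target address before connect(): failure with anything other
than EADDRNOTAVAIL or EACCES means the address is local and already bound,
while success is inconclusive once non-local binds are enabled. A standalone
sketch of that test (placeholder address and port):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct sockaddr_in dst = { .sin_family = AF_INET,
                                   .sin_port = htons(8080) };
        int s;

        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder target */

        if ((s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP)) < 0)
                return 1;

        if (bind(s, (struct sockaddr *)&dst, sizeof(dst))) {
                if (errno != EADDRNOTAVAIL && errno != EACCES)
                        printf("local, already bound: take the local path\n");
                else
                        printf("not a local address\n");
        } else {
                /* Inconclusive (e.g. non-local binds enabled): start over */
                printf("bind() succeeded, discard this socket and proceed\n");
        }

        close(s);
        return 0;
}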
|
|
|
conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp);
|
|
|
|
|
2024-07-18 05:26:32 +00:00
|
|
|
if (!inany_is_unicast(&ini->eaddr) || ini->eport == 0 ||
|
2024-08-21 04:19:57 +00:00
|
|
|
!inany_is_unicast(&ini->oaddr) || ini->oport == 0) {
|
2024-07-18 05:26:32 +00:00
|
|
|
char sstr[INANY_ADDRSTRLEN], dstr[INANY_ADDRSTRLEN];
|
|
|
|
|
|
|
|
debug("Invalid endpoint in TCP SYN: %s:%hu -> %s:%hu",
|
|
|
|
inany_ntop(&ini->eaddr, sstr, sizeof(sstr)), ini->eport,
|
2024-08-21 04:19:57 +00:00
|
|
|
inany_ntop(&ini->oaddr, dstr, sizeof(dstr)), ini->oport);
|
2024-07-18 05:26:32 +00:00
|
|
|
goto cancel;
|
tcp: Validate TCP endpoint addresses
TCP connections should typically not have wildcard addresses (0.0.0.0
or ::) nor a zero port number for either endpoint. It's not entirely
clear (at least to me) if it's strictly against the RFCs to do so, but
at any rate the socket interfaces often treat those values
specially[1], so it's not really possible to manipulate such
connections. Likewise they should not have broadcast or multicast
addresses for either endpoint.
However, nothing prevents a guest from creating a SYN packet with such
values, and it's not entirely clear what the effect on passt would be.
To ensure sane behaviour, explicitly check for this case and drop such
packets, logging a debug warning (we don't want a higher level,
because that would allow a guest to spam the logs).
We never expect such an address on an accept()ed socket either, but
just in case, check for it as well.
[1] Depending on context as "unknown", "match any" or "kernel, pick
something for me"
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2024-02-28 11:25:17 +00:00
|
|
|
}
|
|
|
|
|
2024-02-19 07:56:50 +00:00
|
|
|
if ((s = tcp_conn_sock(c, af)) < 0)
|
|
|
|
goto cancel;
|
2022-03-15 00:07:02 +00:00
|
|
|
|
2024-07-18 05:26:28 +00:00
|
|
|
pif_sockaddr(c, &sa, &sl, PIF_HOST, &tgt->eaddr, tgt->eport);
|
2022-03-15 00:07:02 +00:00
|
|
|
|
tcp: Don't rely on bind() to fail to decide that connection target is valid
Commit e1a2e2780c91 ("tcp: Check if connection is local or low RTT
was seen before using large MSS") added a call to bind() before we
issue a connect() to the target for an outbound connection.
If bind() fails, but neither with EADDRNOTAVAIL, nor with EACCESS, we
can conclude that the target address is a local (host) address, and we
can use an unlimited MSS.
While at it, according to the reasoning of that commit, if bind()
succeeds, we would know right away that nobody is listening at that
(local) address and port, and we don't even need to call connect(): we
can just fail early and reset the connection attempt.
But if non-local binds are enabled via net.ipv4.ip_nonlocal_bind or
net.ipv6.ip_nonlocal_bind sysctl, binding to a non-local address will
actually succeed, so we can't rely on it to fail in general.
The visible issue with the existing behaviour is that we would reset
any outbound connection to non-local addresses, if non-local binds are
enabled.
Keep the significant optimisation for local addresses along with the
bind() call, but if it succeeds, don't draw any conclusion: close the
socket, grab another one, and proceed normally.
This will incur a small latency penalty if non-local binds are
enabled (we'll likely fetch an existing socket from the pool but
additionally call close()), or if the target is local but not bound:
we'll need to call connect() and get a failure before relaying that
failure back.
Link: https://github.com/containers/podman/issues/23003
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2024-06-18 10:32:17 +00:00
|
|
|
/* Use bind() to check if the target address is local (EADDRINUSE or
|
|
|
|
* similar) and already bound, and set the LOCAL flag in that case.
|
|
|
|
*
|
|
|
|
* If bind() succeeds, in general, we could infer that nobody (else) is
|
|
|
|
* listening on that address and port and reset the connection attempt
|
|
|
|
* early, but we can't rely on that if non-local binds are enabled,
|
|
|
|
* because bind() would succeed for any non-local address we can reach.
|
|
|
|
*
|
|
|
|
* So, if bind() succeeds, close the socket, get a new one, and proceed.
|
|
|
|
*/
|
2024-07-18 05:26:28 +00:00
|
|
|
if (bind(s, &sa.sa, sl)) {
|
tcp: Don't rely on bind() to fail to decide that connection target is valid
Commit e1a2e2780c91 ("tcp: Check if connection is local or low RTT
was seen before using large MSS") added a call to bind() before we
issue a connect() to the target for an outbound connection.
If bind() fails, but neither with EADDRNOTAVAIL, nor with EACCESS, we
can conclude that the target address is a local (host) address, and we
can use an unlimited MSS.
While at it, according to the reasoning of that commit, if bind()
succeeds, we would know right away that nobody is listening at that
(local) address and port, and we don't even need to call connect(): we
can just fail early and reset the connection attempt.
But if non-local binds are enabled via net.ipv4.ip_nonlocal_bind or
net.ipv6.ip_nonlocal_bind sysctl, binding to a non-local address will
actually succeed, so we can't rely on it to fail in general.
The visible issue with the existing behaviour is that we would reset
any outbound connection to non-local addresses, if non-local binds are
enabled.
Keep the significant optimisation for local addresses along with the
bind() call, but if it succeeds, don't draw any conclusion: close the
socket, grab another one, and proceed normally.
This will incur a small latency penalty if non-local binds are
enabled (we'll likely fetch an existing socket from the pool but
additionally call close()), or if the target is local but not bound:
we'll need to call connect() and get a failure before relaying that
failure back.
Link: https://github.com/containers/podman/issues/23003
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2024-06-18 10:32:17 +00:00
|
|
|
if (errno != EADDRNOTAVAIL && errno != EACCES)
|
|
|
|
conn_flag(c, conn, LOCAL);
|
|
|
|
} else {
|
|
|
|
/* Not a local, bound destination, inconclusive test */
|
|
|
|
close(s);
|
|
|
|
if ((s = tcp_conn_sock(c, af)) < 0)
|
|
|
|
goto cancel;
|
|
|
|
}
	conn->sock = s;
	conn->timer = -1;

	conn_event(c, conn, TAP_SYN_RCVD);
tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connection to timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for originating socket in spliced connections,
there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-19 00:29:05 +00:00
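The sequence number comparison macros mentioned above (SEQ_LE() is used by tcp_sock_consume() further down) follow the usual wrap-around-safe pattern for 32-bit serial numbers. The sketch below shows the general idea only; the exact passt definitions may differ.

/* Illustrative only: wrap-around-safe comparisons of 32-bit TCP sequence
 * numbers, in the style of the SEQ_*() macros referenced above.
 */
#include <stdint.h>

#define SEQ_LT(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) < 0)
#define SEQ_LE(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) <= 0)
#define SEQ_GT(a, b)	SEQ_LT((b), (a))
#define SEQ_GE(a, b)	SEQ_LE((b), (a))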
	conn->wnd_to_tap = WINDOW_DEFAULT;
tcp: Fixes for closing states, spliced connections, out-of-order packets, etc.
This fixes a number of issues found with some heavier testing with
uperf and neper:
- in most closing states, we can still accept data, check for EPOLLIN
when appropriate
- introduce a new state, ESTABLISHED_SOCK_FIN_SENT, to track the fact
we already sent a FIN segment to the tap device, for proper sequence
number bookkeeping
- for pasta mode only: spliced connections also need tracking of
(inferred) FIN segments and clean half-pipe shutdowns
- streamline resetting epoll_wait bitmaps with a new function,
tcp_tap_epoll_mask(), instead of repeating the logic all over the
place
- set EPOLLET for tap connections too, whenever we are waiting for
EPOLLRDHUP or an event from the tap to proceed with data transfer,
to avoid useless loops with EPOLLIN set
- impose an additional limit on the sending window advertised to the
guest, given by SO_SNDBUF: it makes no sense to completely fill
the sending buffer and send a zero window: stop a bit before we
hit that
- handle *all* interrupted system calls as needed
- simplify the logic for reordering of out-of-order segments received
from tap: it's not a corner case, and the previous logic allowed
for deadloops
- fix comparison of seen IPv4 address when we get a new connection
from a socket directed to the configured guest address
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-09 13:16:46 +00:00
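One of the points above, limiting the window advertised to the guest by SO_SNDBUF, boils down to a simple clamp. The helper below is only a sketch with hypothetical names, not the code actually used here.

/* Illustrative only: never advertise more window than a fraction of the
 * socket send buffer, so we don't fill it and then have to send a zero
 * window to the guest.
 */
#include <stdint.h>

static uint32_t wnd_to_advertise(uint32_t wnd, uint32_t sndbuf)
{
	uint32_t cap = sndbuf / 2;	/* stop a bit before SO_SNDBUF fills up */

	return wnd < cap ? wnd : cap;
}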
tcp: Clamp MSS value when queueing data to tap, also for pasta
Tom reports that a pattern of repeated ~1 MiB chunk downloads over
NNTP over TLS, on Podman 4.4 using pasta as network back-end, results
in pasta taking one full CPU thread after a while, and the download
never succeeds.
On that setup, we end up re-sending the same frame over and over,
with a consistent 65 534 bytes size, and never get an
acknowledgement from the tap-side client. This only happens for the
default MTU value (65 520 bytes) or for values that are slightly
smaller than that (down to 64 499 bytes).
We hit this condition because the MSS value we use in
tcp_data_from_sock(), only in pasta mode, is simply clamped to
USHRT_MAX, and not to the actual size of the buffers we pre-cooked
for sending, which is a bit less than that.
It looks like we got away with it until commit 0fb7b2b9080a ("tap:
Use different io vector bases depending on tap type") fixed the
setting of iov_len.
Luckily, since it's pasta, we're queueing up to two frames at a time,
so the worst that can happen is a badly segmented TCP stream: we
always have some space at the tail of the buffer.
Clamp the MSS value to the appropriate maximum given by struct
tcp{4,6}_buf_data_t, no matter if we're running in pasta or passt
mode.
While at it, fix the comments on those structs to reflect the current
struct size. This is not really relevant for any further calculation
or consideration, but it's convenient to know while debugging this
kind of issue.
Thanks to Tom for reporting the issue in a very detailed way and for
providing a test setup.
Reported-by: Tom Mombourquette <tom@devnode.com>
Link: https://github.com/containers/podman/issues/17703
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2023-03-08 17:07:42 +00:00
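The fix described above amounts to clamping against the frame buffer capacity rather than only against USHRT_MAX. A sketch of that clamp, with hypothetical names, follows; it is not the actual tcp_buf code.

/* Illustrative only: clamp the MSS to what one pre-cooked frame buffer can
 * actually carry, not just to the 16-bit limit of the MSS option.
 */
#include <limits.h>
#include <stddef.h>

static unsigned int clamp_mss(unsigned int mss, size_t frame_payload_max)
{
	if (mss > frame_payload_max)
		mss = frame_payload_max;
	if (mss > USHRT_MAX)
		mss = USHRT_MAX;

	return mss;
}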
	mss = tcp_conn_tap_mss(conn, opts, optlen);
	if (setsockopt(s, SOL_TCP, TCP_MAXSEG, &mss, sizeof(mss)))
		flow_trace(conn, "failed to set TCP_MAXSEG on socket %i", s);
	MSS_SET(conn, mss);
treewide: Packet abstraction with mandatory boundary checks
Implement a packet abstraction providing boundary and size checks
based on packet descriptors: packets stored in a buffer can be queued
into a pool (without storage of its own), and data can be retrieved
referring to an index in the pool, specifying offset and length.
Checks ensure data is not read outside the boundaries of buffer and
descriptors, and that packets added to a pool are within the buffer
range with valid offset and indices.
This implies a wider rework: usage of the "queueing" part of the
abstraction mostly affects tap_handler_{passt,pasta}() functions and
their callees, while the "fetching" part affects all the guest or tap
facing implementations: TCP, UDP, ICMP, ARP, NDP, DHCP and DHCPv6
handlers.
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2022-03-25 12:02:47 +00:00
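The shape of such a bounds-checked fetch is roughly the following. This is only a sketch with made-up type and function names, not the actual packet pool API.

/* Illustrative only: return a pointer to @len bytes at @offset within a
 * described packet, refusing anything outside the descriptor or the buffer.
 */
#include <stddef.h>

struct pkt_desc { size_t offset, len; };

static const char *pkt_get(const char *buf, size_t buf_size,
			   const struct pkt_desc *d, size_t offset, size_t len)
{
	if (d->offset > buf_size || d->len > buf_size - d->offset)
		return NULL;	/* descriptor doesn't fit in the buffer */

	if (offset > d->len || len > d->len - offset)
		return NULL;	/* requested range outside the packet */

	return buf + d->offset + offset;
}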
	tcp_get_tap_ws(conn, opts, optlen);

	/* RFC 7323, 2.2: first value is not scaled. Also, don't clamp yet, to
	 * avoid getting a zero scale just because we set a small window now.
	 */
	if (!(conn->wnd_from_tap = (htons(th->window) >> conn->ws_from_tap)))
		conn->wnd_from_tap = 1;
	conn->seq_init_from_tap = ntohl(th->seq);
	conn->seq_from_tap = conn->seq_init_from_tap + 1;
	conn->seq_ack_to_tap = conn->seq_from_tap;
	hash = flow_hash_insert(c, TAP_SIDX(conn));
	conn->seq_to_tap = tcp_init_seq(hash, now);
tcp: Don't special case the handling of the ack of a syn
TCP treats the SYN packets as though they occupied 1 byte in the logical
data stream described by the sequence numbers. That is, the very first ACK
(or SYN-ACK) each side sends should acknowledge a sequence number one
greater than the initial sequence number given in the SYN or SYN-ACK it's
responding to.
In passt we were tracking that by advancing conn->seq_to_tap by one when
we send a SYN or SYN-ACK (in tcp_send_flag()). However, we also
initialized conn->seq_ack_from_tap, representing the acks we've already
seen from the tap side, to ISN+1, meaning we treated it as having
acknowledged the SYN before it actually did.
There were apparently reasons for this in earlier versions, but it causes
problems now. Because of this, when we actually did receive the initial ACK
or SYN-ACK, we wouldn't see the acknowledged sequence number as advancing,
and so wouldn't clear the ACK_FROM_TAP_DUE flag.
In most cases we'd get away with it because subsequent packets would clear
the flag. However, if one (or both) sides didn't send any data, the other
side would (correctly) keep sending ISN+1 as the acknowledged sequence
number, meaning we would never clear the ACK_FROM_TAP_DUE flag. That would
mean we'd treat the connection as if we needed to retransmit (although we
had 0 bytes to retransmit), and eventually (after around 30s) reset the
connection due to too many retransmits. Specifically, this could cause the
iperf3 throughput tests in the testsuite to fail if set for a long enough
test period.
Correct this by initializing conn->seq_ack_from_tap to the ISN and only
advancing it when we actually get the first ACK (or SYN-ACK).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-03-27 03:56:34 +00:00
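In other words, the SYN consumes one sequence number: if the initial sequence number we send is isn, the first genuine acknowledgement from the guest carries isn + 1, and seq_ack_from_tap has to start at isn so that this first ACK is seen as progress. A two-line illustration, not passt code:

/* Illustrative only: the SYN itself occupies one sequence number */
uint32_t isn = 0x12345678;		/* initial sequence number sent with the SYN */
uint32_t first_valid_ack = isn + 1;	/* first ACK that confirms the SYN was seen */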
	conn->seq_ack_from_tap = conn->seq_to_tap;
	tcp_bind_outbound(c, conn, s);

	if (connect(s, &sa.sa, sl)) {
		if (errno != EINPROGRESS) {
			tcp_rst(c, conn);
			goto cancel;
		}

		tcp_get_sndbuf(conn);
	} else {
		tcp_get_sndbuf(conn);

		if (tcp_send_flag(c, conn, SYN | ACK))
			goto cancel;

		conn_event(c, conn, TAP_SYN_ACK_SENT);
	}

	tcp_epoll_ctl(c, conn);
	FLOW_ACTIVATE(conn);
	return;

cancel:
	if (s >= 0)
		close(s);
	flow_alloc_cancel(flow);
}

/**
 * tcp_sock_consume() - Consume (discard) data from buffer
 * @conn:	Connection pointer
 * @ack_seq:	ACK sequence, host order
 *
 * Return: 0 on success, negative error code from recv() on failure
*/
#ifdef VALGRIND
/* valgrind doesn't realise that passing a NULL buffer to recv() is ok if using
 * MSG_TRUNC. We have a suppression for this in the tests, but it relies on
 * valgrind being able to see the tcp_sock_consume() stack frame, which it won't
 * if this gets inlined. This has a single caller making it a likely inlining
 * candidate, and certain compiler versions will do so even at -O0.
 */
__attribute__((noinline))
#endif /* VALGRIND */
static int tcp_sock_consume(const struct tcp_tap_conn *conn, uint32_t ack_seq)
{
	/* Simply ignore out-of-order ACKs: we already consumed the data we
	 * needed from the buffer, and we won't rewind back to a lower ACK
	 * sequence.
	 */
	if (SEQ_LE(ack_seq, conn->seq_ack_from_tap))
		return 0;

	/* cppcheck-suppress [nullPointer, unmatchedSuppression] */
	if (recv(conn->sock, NULL, ack_seq - conn->seq_ack_from_tap,
		 MSG_DONTWAIT | MSG_TRUNC) < 0)
		return -errno;

	return 0;
}

/**
 * tcp_data_from_sock() - Handle new data from socket, queue to tap, in window
 * @c:		Execution context
 * @conn:	Connection pointer
 *
 * Return: negative on connection reset, 0 otherwise
 *
 * #syscalls recvmsg
 */
static int tcp_data_from_sock(const struct ctx *c, struct tcp_tap_conn *conn)
{
	return tcp_buf_data_from_sock(c, conn);
}

/**
 * tcp_data_from_tap() - tap/guest data for established connection
 * @c:		Execution context
 * @conn:	Connection pointer
 * @p:		Pool of TCP packets, with TCP headers
 * @idx:	Index of first data packet in pool
 *
 * #syscalls sendmsg
 *
 * Return: count of consumed packets
 */
static int tcp_data_from_tap(const struct ctx *c, struct tcp_tap_conn *conn,
			     const struct pool *p, int idx)
{
	int i, iov_i, ack = 0, fin = 0, retr = 0, keep = -1, partial_send = 0;
tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connections to timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for originating socket in spliced connections,
there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-19 00:29:05 +00:00
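
A note on the sequence number macros mentioned above: TCP sequence
numbers wrap, so they can't be compared with plain relational operators.
The usual approach, sketched below, is to compare the signed 32-bit
difference; the SEQ_GE()/SEQ_LT()/SEQ_LE() macros used in the code that
follows may differ in exact form.

#include <stdint.h>

/* Wrap-safe comparison of 32-bit sequence numbers: the signed difference
 * is meaningful as long as the two values are less than 2^31 apart.
 */
#define SEQ_DIFF(a, b)  ((int32_t)((uint32_t)(a) - (uint32_t)(b)))
#define SEQ_LT(a, b)    (SEQ_DIFF((a), (b)) < 0)
#define SEQ_LE(a, b)    (SEQ_DIFF((a), (b)) <= 0)
#define SEQ_GT(a, b)    (SEQ_DIFF((a), (b)) > 0)
#define SEQ_GE(a, b)    (SEQ_DIFF((a), (b)) >= 0)
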
        uint16_t max_ack_seq_wnd = conn->wnd_from_tap;
        uint32_t max_ack_seq = conn->seq_ack_from_tap;
        uint32_t seq_from_tap = conn->seq_from_tap;
        struct msghdr mh = { .msg_iov = tcp_iov };
        size_t len;
        ssize_t n;

        if (conn->events == CLOSED)
                return p->count - idx;

        ASSERT(conn->events & ESTABLISHED);

        for (i = idx, iov_i = 0; i < (int)p->count; i++) {
                uint32_t seq, seq_offset, ack_seq;
                const struct tcphdr *th;
                char *data;
                size_t off;

                th = packet_get(p, i, 0, sizeof(*th), &len);
                if (!th)
                        return -1;
                len += sizeof(*th);

                off = th->doff * 4UL;
                if (off < sizeof(*th) || off > len)
                        return -1;

                if (th->rst) {
                        conn_event(c, conn, CLOSED);
                        return 1;
                }

                len -= off;
                data = packet_get(p, i, off, len, NULL);
                if (!data)
                        continue;

                seq = ntohl(th->seq);
                ack_seq = ntohl(th->ack_seq);

                if (th->ack) {
                        ack = 1;

                        if (SEQ_GE(ack_seq, conn->seq_ack_from_tap) &&
                            SEQ_GE(ack_seq, max_ack_seq)) {
                                /* Fast re-transmit */
                                retr = !len && !th->fin &&
                                       ack_seq == max_ack_seq &&
                                       ntohs(th->window) == max_ack_seq_wnd;

                                max_ack_seq_wnd = ntohs(th->window);
                                max_ack_seq = ack_seq;
                        }
                }

                if (th->fin)
                        fin = 1;

                if (!len)
                        continue;

tcp: Fixes for closing states, spliced connections, out-of-order packets, etc.
This fixes a number of issues found with some heavier testing with
uperf and neper:
- in most closing states, we can still accept data, check for EPOLLIN
when appropriate
- introduce a new state, ESTABLISHED_SOCK_FIN_SENT, to track the fact
we already sent a FIN segment to the tap device, for proper sequence
number bookkeeping
- for pasta mode only: spliced connections also need tracking of
(inferred) FIN segments and clean half-pipe shutdowns
- streamline resetting epoll_wait bitmaps with a new function,
tcp_tap_epoll_mask(), instead of repeating the logic all over the
place
- set EPOLLET for tap connections too, whenever we are waiting for
EPOLLRDHUP or an event from the tap to proceed with data transfer,
to avoid useless loops with EPOLLIN set
- impose an additional limit on the sending window advertised to the
guest, given by SO_SNDBUF: it makes no sense to completely fill
the sending buffer and send a zero window: stop a bit before we
hit that
- handle *all* interrupted system calls as needed
- simplify the logic for reordering of out-of-order segments received
from tap: it's not a corner case, and the previous logic allowed
for deadloops
- fix comparison of seen IPv4 address when we get a new connection
from a socket directed to the configured guest address
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-09 13:16:46 +00:00

                seq_offset = seq_from_tap - seq;
                /* Use data from this buffer only in these two cases:
                 *
                 *      , seq_from_tap               , seq_from_tap
                 * |--------| <-- len            |--------| <-- len
                 * '----' <-- offset             ' <-- offset
                 *    ^ seq                      ^ seq
                 *    (offset >= 0, seq + len > seq_from_tap)
                 *
                 * discard in these two cases:
                 *          , seq_from_tap           , seq_from_tap
                 * |--------| <-- len            |--------| <-- len
                 * '--------' <-- offset         '-----| <- offset
                 *    ^ seq                      ^ seq
                 *    (offset >= 0, seq + len <= seq_from_tap)
                 *
                 * keep, look for another buffer, then go back, in this case:
                 *      , seq_from_tap
                 *          |--------| <-- len
                 *      '===' <-- offset
                 *          ^ seq
                 *    (offset < 0)
                 */
                if (SEQ_GE(seq_offset, 0) && SEQ_LE(seq + len, seq_from_tap))
                        continue;

                if (SEQ_LT(seq_offset, 0)) {
                        if (keep == -1)
                                keep = i;
                        continue;
                }

                tcp_iov[iov_i].iov_base = data + seq_offset;
                tcp_iov[iov_i].iov_len = len - seq_offset;
                seq_from_tap += tcp_iov[iov_i].iov_len;
                iov_i++;

                if (keep == i)
                        keep = -1;

                if (keep != -1)
                        i = keep - 1;
        }

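To make the three cases above concrete with numbers: with seq_from_tap at
1000, a segment with seq 990 and len 30 overlaps the cut-off, so only its
last 20 bytes (sequence 1000..1019) are queued and seq_from_tap moves to
1020; a segment with seq 990 and len 10 carries nothing new and is
discarded; a segment with seq 1010 has arrived ahead of a gap, so its
index is parked in @keep and the loop comes back to it once in-order data
has filled the hole.
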
tcp: Reset ACK_FROM_TAP_DUE flag only as needed, update timer
David reports that TCP transfers might stall, especially with smaller
socket buffer sizes, because we reset the ACK_FROM_TAP_DUE flag, in
tcp_tap_handler(), whenever we receive an ACK segment, regardless of
its sequence number and the fact that we might still be waiting for
one. This way, we might fail to re-transmit frames on ACK timeouts.
We need, instead, to:
- indicate with the @retrans field only re-transmissions for the same
data sequences. If we make progress, it should be reset, given that
it's used to abort a connection when we exceed a given number of
re-transmissions for the same data
- unset the ACK_FROM_TAP_DUE flag if and only if the acknowledged
sequence is the same as the last one we sent, as suggested by David
- keep it set otherwise, if progress was done but not all the data we
sent was acknowledged, and update the expiration of the ACK timeout
Add a new helper for these purposes, tcp_update_seqack_from_tap().
To extend the ACK timeout, the new helper sets the ACK_FROM_TAP_DUE
flag, even if it was already set, and conn_flag_do() triggers a timer
update. This part should be revisited at a later time, because,
strictly speaking, ACK_FROM_TAP_DUE isn't a flag anymore. One
possibility might be to introduce another connection attribute for
events affecting timer deadlines.
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Link: https://bugs.passt.top/show_bug.cgi?id=41
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Fixes: be5bbb9b0681 ("tcp: Rework timers to use timerfd instead of periodic bitmap scan")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-02-12 21:26:55 +00:00
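
A sketch of the rules just described, using stand-in types and helpers
rather than the real connection flags and timerfd plumbing (field and
flag names below are illustrative only):

#include <stdint.h>

struct conn_sketch {
        uint32_t seq_to_tap;            /* highest sequence sent to the tap */
        uint32_t seq_ack_from_tap;      /* highest sequence acked by the tap */
        int retrans;                    /* re-transmissions of the same data */
        int ack_due;                    /* stand-in for ACK_FROM_TAP_DUE */
};

/* Stand-ins for the real flag handling and timer update */
static void ack_timeout_clear(struct conn_sketch *c)  { c->ack_due = 0; }
static void ack_timeout_extend(struct conn_sketch *c) { c->ack_due = 1; }

static void update_seqack_from_tap(struct conn_sketch *c, uint32_t seq)
{
        if (seq == c->seq_to_tap)               /* all sent data acknowledged */
                ack_timeout_clear(c);
        else if (seq != c->seq_ack_from_tap)    /* progress, but not complete: */
                ack_timeout_extend(c);          /* keep waiting, push deadline */

        if (seq != c->seq_ack_from_tap) {
                c->retrans = 0;                 /* progress resets the counter */
                c->seq_ack_from_tap = seq;
        }
}
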
        /* On socket flush failure, pretend there was no ACK, try again later */
        if (ack && !tcp_sock_consume(conn, max_ack_seq))
                tcp_update_seqack_from_tap(c, conn, max_ack_seq);

tcp: Don't use TCP_WINDOW_CLAMP
On the L2 tap side, we see TCP headers and know the TCP window that the
ultimate receiver is advertising. In order to avoid unnecessary buffering
within passt/pasta (or by the kernel on passt/pasta's behalf) we attempt
to advertise that window back to the original sock-side sender using
TCP_WINDOW_CLAMP.
However, TCP_WINDOW_CLAMP just doesn't work like this. Prior to kernel
commit 3aa7857fe1d7 ("tcp: enable mid stream window clamp"), it simply
had no effect on established sockets. After that commit, it does affect
established sockets but doesn't behave the way we need:
* It appears to be designed only to shrink the window, not to allow it to
re-expand.
* More importantly, that commit has a serious bug where if the
setsockopt() is made when the existing kernel advertised window for the
socket happens to be zero, it will now become locked at zero, stopping
any further data from being received on the socket.
Since this has never worked as intended, simply remove it. It might be
possible to re-implement the intended behaviour by manipulating SO_RCVBUF,
so we leave a comment to that effect.
This kernel bug is the underlying cause of both the linked passt bug and
the linked podman bug. We attempted to fix this before with passt commit
d3192f67 ("tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag").
However while that commit masked the bug for some cases, it didn't really
address the problem.
Fixes: d3192f67c492 ("tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag")
Link: https://github.com/containers/podman/issues/20170
Link: https://bugs.passt.top/show_bug.cgi?id=74
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-11-09 09:54:00 +00:00
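
Should the SO_RCVBUF route mentioned above ever be explored, the
mechanics would be a plain setsockopt() as sketched here; this is only an
illustration of that possibility, not something the current code does,
and the kernel will double the value and enforce its own minimum anyway.

#include <sys/socket.h>

/* Cap kernel receive buffering on socket 's' to roughly 'bytes' */
static int cap_rcvbuf(int s, int bytes)
{
        return setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
}
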
        tcp_tap_window_update(conn, max_ack_seq_wnd);

tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag
It looks like we need it as workaround for this situation, readily
reproducible at least with a 6.5 Linux kernel, with default rmem_max
and wmem_max values:
- an iperf3 client on the host sends about 160 KiB, typically
segmented into five frames by passt. We read this data using
MSG_PEEK
- the iperf3 server on the guest starts receiving
- meanwhile, the host kernel advertised a zero-sized window to the
sender, as expected
- eventually, the guest acknowledges all the data sent so far, and
we drop it from the buffer, courtesy of tcp_sock_consume(), using
recv() with MSG_TRUNC
- the client, however, doesn't get an updated window value, and
even keepalive packets are answered with zero-window segments,
until the connection is closed
It looks like dropping data from a socket using MSG_TRUNC doesn't
cause a recalculation of the window, which would be expected as a
result of any receiving operation that invalidates data on a buffer
(that is, not with MSG_PEEK).
Strangely enough, setting TCP_WINDOW_CLAMP via setsockopt(), even to
the previous value we clamped to, forces a recalculation of the
window which is advertised to the sender.
I couldn't quite confirm this issue by following all the possible
code paths in the kernel, yet. If confirmed, this should be fixed in
the kernel, but meanwhile this workaround looks robust to me (and it
will be needed for backward compatibility anyway).
Reported-by: Matej Hrica <mhrica@redhat.com>
Link: https://bugs.passt.top/show_bug.cgi?id=74
Analysed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2023-09-22 21:21:20 +00:00
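
For reference, the "drop it from the buffer" step described above, that
is, discarding already-acknowledged bytes without copying them out, comes
down to a recv() with MSG_TRUNC; a minimal sketch, with error handling
reduced to the bare minimum and no claim to match tcp_sock_consume()
exactly:

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Discard 'len' bytes queued on socket 's' without copying them out */
static int sock_discard(int s, size_t len)
{
        ssize_t n = recv(s, NULL, len, MSG_DONTWAIT | MSG_TRUNC);

        if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
                return -errno;

        return 0;
}
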
        if (retr) {
                flow_trace(conn,
                           "fast re-transmit, ACK: %u, previous sequence: %u",
                           max_ack_seq, conn->seq_to_tap);
                conn->seq_to_tap = max_ack_seq;
                if (tcp_set_peek_offset(conn->sock, 0)) {
                        tcp_rst(c, conn);
                        return -1;
                }
                tcp_data_from_sock(c, conn);
        }

        if (!iov_i)
                goto out;

        mh.msg_iovlen = iov_i;
eintr:
        n = sendmsg(conn->sock, &mh, MSG_DONTWAIT | MSG_NOSIGNAL);
        if (n < 0) {
                if (errno == EPIPE) {
                        /* Here's the wrap, said the tap.
                         * In my pocket, said the socket.
                         * Then swiftly looked away and left.
                         */
                        conn->seq_from_tap = seq_from_tap;
                        tcp_send_flag(c, conn, ACK);
                }

                if (errno == EINTR)
                        goto eintr;

                if (errno == EAGAIN || errno == EWOULDBLOCK) {
                        tcp_send_flag(c, conn, ACK_IF_NEEDED);
                        return p->count - idx;
                }

                return -1;
        }

        if (n < (int)(seq_from_tap - conn->seq_from_tap)) {
                partial_send = 1;
                conn->seq_from_tap += n;
                tcp_send_flag(c, conn, ACK_IF_NEEDED);
        } else {
                conn->seq_from_tap += n;
        }
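
As a concrete example of the partial send accounting above: if the loop
queued 3000 new bytes (seq_from_tap moved from 100000 to 103000) but
sendmsg() only accepted 1800 of them, conn->seq_from_tap advances to
101800, so only the bytes actually written end up acknowledged, the guest
eventually retransmits the remaining 1200, and partial_send makes the FIN
processing below wait for a later pass.
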
out:
        if (keep != -1) {
                /* We use an 8-bit approximation here: the associated risk is
                 * that we skip a duplicate ACK on 8-bit sequence number
                 * collision. Fast retransmit is a SHOULD in RFC 5681, 3.2.
                 */
                if (conn->seq_dup_ack_approx != (conn->seq_from_tap & 0xff)) {
                        conn->seq_dup_ack_approx = conn->seq_from_tap & 0xff;
                        tcp_send_flag(c, conn, ACK | DUP_ACK);
                }
                return p->count - idx;
        }
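
In other words, only the low byte of seq_from_tap is remembered: if an
out-of-order batch leaves seq_from_tap at 0x12345678, the stored
approximation is 0x78, and a later duplicate ACK is only (wrongly)
suppressed when the new position happens to end in 0x78 as well, roughly
a 1-in-256 chance, which is acceptable since fast retransmit is only a
SHOULD in RFC 5681.
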
        if (ack && conn->events & TAP_FIN_SENT &&
            conn->seq_ack_from_tap == conn->seq_to_tap)
                conn_event(c, conn, TAP_FIN_ACKED);

        if (fin && !partial_send) {
                conn->seq_from_tap++;

                conn_event(c, conn, TAP_FIN_RCVD);
        } else {
                tcp_send_flag(c, conn, ACK_IF_NEEDED);
        }

        return p->count - idx;
}

/**
 * tcp_conn_from_sock_finish() - Complete connection setup after connect()
 * @c:		Execution context
 * @conn:	Connection pointer
 * @th:		TCP header of SYN, ACK segment: caller MUST ensure it's there
 * @opts:	Pointer to start of options
 * @optlen:	Bytes in options: caller MUST ensure available length
 */
static void tcp_conn_from_sock_finish(const struct ctx *c,
                                      struct tcp_tap_conn *conn,
                                      const struct tcphdr *th,
                                      const char *opts, size_t optlen)
{
        tcp_tap_window_update(conn, ntohs(th->window));
        tcp_get_tap_ws(conn, opts, optlen);

        /* First value is not scaled */
        if (!(conn->wnd_from_tap >>= conn->ws_from_tap))
                conn->wnd_from_tap = 1;
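
A worked example of the scaling above: the window field of a SYN,ACK is
never scaled (RFC 7323, section 2.2), so with a window scale of 7
announced by the tap and a raw SYN,ACK window of 64240 bytes, the stored
value becomes 64240 >> 7 = 501, presumably to match the units of later
segments, whose window field does need to be multiplied by 2^7; a raw
window smaller than 128 would shift down to zero, hence the floor of 1.
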
tcp: Clamp MSS value when queueing data to tap, also for pasta
Tom reports that a pattern of repated ~1 MiB chunks downloads over
NNTP over TLS, on Podman 4.4 using pasta as network back-end, results
in pasta taking one full CPU thread after a while, and the download
never succeeds.
On that setup, we end up re-sending the same frame over and over,
with a consistent 65 534 bytes size, and never get an
acknowledgement from the tap-side client. This only happens for the
default MTU value (65 520 bytes) or for values that are slightly
smaller than that (down to 64 499 bytes).
We hit this condition because the MSS value we use in
tcp_data_from_sock(), only in pasta mode, is simply clamped to
USHRT_MAX, and not to the actual size of the buffers we pre-cooked
for sending, which is a bit less than that.
It looks like we got away with it until commit 0fb7b2b9080a ("tap:
Use different io vector bases depending on tap type") fixed the
setting of iov_len.
Luckily, since it's pasta, we're queueing up to two frames at a time,
so the worst that can happen is a badly segmented TCP stream: we
always have some space at the tail of the buffer.
Clamp the MSS value to the appropriate maximum given by struct
tcp{4,6}_buf_data_t, no matter if we're running in pasta or passt
mode.
While at it, fix the comments on those structs to reflect the current
struct size. This is not really relevant for any further calculation
or consideration, but it's convenient to know while debugging this
kind of issue.
Thanks to Tom for reporting the issue in a very detailed way and for
providing a test setup.
Reported-by: Tom Mombourquette <tom@devnode.com>
Link: https://github.com/containers/podman/issues/17703
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2023-03-08 17:07:42 +00:00
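A hedged sketch of the clamp itself, with an assumed constant standing in for
the payload room actually left by struct tcp{4,6}_buf_data_t:

/* Assumed value: payload room left in a pre-cooked frame buffer, not the
 * real constant from the tree.
 */
#define TAP_BUF_DATA_ROOM	65220U

static unsigned int tcp_clamp_tap_mss(unsigned int mss)
{
	/* Never advertise more than what a single queued frame can carry */
	return mss < TAP_BUF_DATA_ROOM ? mss : TAP_BUF_DATA_ROOM;
}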
	MSS_SET(conn, tcp_conn_tap_mss(conn, opts, optlen));

	conn->seq_init_from_tap = ntohl(th->seq) + 1;
	conn->seq_from_tap = conn->seq_init_from_tap;
	conn->seq_ack_to_tap = conn->seq_from_tap;

	conn_event(c, conn, ESTABLISHED);
	if (tcp_set_peek_offset(conn->sock, 0)) {
		tcp_rst(c, conn);
		return;
	}

tcp: Send "empty" handshake ACK before first data segment
Starting from commit 9178a9e3462d ("tcp: Always send an ACK segment
once the handshake is completed"), we always send an ACK segment,
without any payload, to complete the three-way handshake while
establishing a connection started from a socket.
We queue that segment after checking if we already have data to send
to the tap, which means that its sequence number is higher than any
segment with data we're sending in the same iteration, if any data is
available on the socket.
However, in tcp_defer_handler(), we first flush "flags" buffers, that
is, we send out segments without any data first, and then segments
with data, which means that our "empty" ACK is sent before the ACK
segment with data (if any), which has a lower sequence number.
This appears to be harmless as the guest or container will generally
reorder segments, but it looks rather weird and we can't exclude that
it's actually causing problems.
Queue the empty ACK first, so that it gets a lower sequence number,
before checking for any data from the socket.
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2024-10-14 22:17:24 +00:00
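	/* Queue the empty handshake ACK first: anything queued by
	 * tcp_data_from_sock() below must get a higher sequence number.
	 */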
	tcp_send_flag(c, conn, ACK);

	/* The client might have sent data already, which we didn't
	 * dequeue waiting for SYN,ACK from tap -- check now.
	 */
	tcp_data_from_sock(c, conn);
}

/**
 * tcp_tap_handler() - Handle packets from tap and state transitions
 * @c: Execution context
 * @pif: pif on which the packet is arriving
* @af: Address family, AF_INET or AF_INET6
 * @saddr: Source address
 * @daddr: Destination address
* @p: Pool of TCP packets, with TCP headers
 * @idx: Index of first packet in pool to process
udp: Connection tracking for ephemeral, local ports, and related fixes
As we support UDP forwarding for packets that are sent to local
ports, we actually need some kind of connection tracking for UDP.
While at it, this commit introduces a number of vaguely related fixes
for issues observed while trying this out. In detail:
- implement an explicit, albeit minimalistic, connection tracking
for UDP, to allow usage of ephemeral ports by the guest and by
the host at the same time, by binding them dynamically as needed,
and to allow mapping address changes for packets with a loopback
address as destination
- set the guest MAC address whenever we receive a packet from tap
instead of waiting for an ARP request, and set it to broadcast on
start, otherwise DHCPv6 might not work if all DHCPv6 requests time
out before the guest starts talking IPv4
- split context IPv6 address into address we assign, global or site
address seen on tap, and link-local address seen on tap, and make
sure we use the addresses we've seen as destination (link-local
choice depends on source address). Similarly, for IPv4, split into
address we assign and address we observe, and use the address we
observe as destination
- introduce a clock_gettime() syscall right after epoll_wait() wakes
up, so that we can remove all the other ones and pass the current
timestamp to tap and socket handlers -- this is additionally needed
by UDP to time out bindings to ephemeral ports and mappings between
loopback address and a local address
- rename sock_l4_add() to sock_l4(), no semantic changes intended
- include <arpa/inet.h> in passt.c before kernel headers so that we
can use <netinet/in.h> macros to check IPv6 address types, and
remove a duplicate <linux/ip.h> inclusion
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-04-29 14:59:20 +00:00
 * @now: Current timestamp
 *
 * Return: count of consumed packets
 */
int tcp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af,
		    const void *saddr, const void *daddr,
		    const struct pool *p, int idx, const struct timespec *now)
{
	struct tcp_tap_conn *conn;
	const struct tcphdr *th;
size_t optlen, len;
	const char *opts;
	union flow *flow;
	flow_sidx_t sidx;
	int ack_due = 0;
	int count;

(void)pif;

	th = packet_get(p, idx, 0, sizeof(*th), &len);
if (!th)
		return 1;
	len += sizeof(*th);

optlen = th->doff * 4UL - sizeof(*th);
	/* Static checkers might fail to see this: */
	optlen = MIN(optlen, ((1UL << 4) /* from doff width */ - 6) * 4UL);
	opts = packet_get(p, idx, sizeof(*th), optlen, NULL);

sidx = flow_lookup_af(c, IPPROTO_TCP, PIF_TAP, af, saddr, daddr,
			      ntohs(th->source), ntohs(th->dest));
	flow = flow_at_sidx(sidx);

/* New connection from tap */
	if (!flow) {
		if (opts && th->syn && !th->ack)
			tcp_conn_from_tap(c, af, saddr, daddr, th,
					  opts, optlen, now);
		return 1;
}

	ASSERT(flow->f.type == FLOW_TCP);
	ASSERT(pif_at_sidx(sidx) == PIF_TAP);
	conn = &flow->tcp;

	flow_trace(conn, "packet length %zu from tap", len);

if (th->rst) {
conn_event(c, conn, CLOSED);
		return 1;
}
tcp: Reset ACK_FROM_TAP_DUE flag only as needed, update timer
David reports that TCP transfers might stall, especially with smaller
socket buffer sizes, because we reset the ACK_FROM_TAP_DUE flag, in
tcp_tap_handler(), whenever we receive an ACK segment, regardless of
its sequence number and the fact that we might still be waiting for
one. This way, we might fail to re-transmit frames on ACK timeouts.
We need, instead, to:
- indicate with the @retrans field only re-transmissions for the same
data sequences. If we make progress, it should be reset, given that
it's used to abort a connection when we exceed a given number of
re-transmissions for the same data
- unset the ACK_FROM_TAP_DUE flag if and only if the acknowledged
sequence is the same as the last one we sent, as suggested by David
- keep it set otherwise, if progress was made but not all the data we
sent was acknowledged, and update the expiration of the ACK timeout
Add a new helper for these purposes, tcp_update_seqack_from_tap().
To extend the ACK timeout, the new helper sets the ACK_FROM_TAP_DUE
flag, even if it was already set, and conn_flag_do() triggers a timer
update. This part should be revisited at a later time, because,
strictly speaking, ACK_FROM_TAP_DUE isn't a flag anymore. One
possibility might be to introduce another connection attribute for
events affecting timer deadlines.
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Link: https://bugs.passt.top/show_bug.cgi?id=41
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Fixes: be5bbb9b0681 ("tcp: Rework timers to use timerfd instead of periodic bitmap scan")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-02-12 21:26:55 +00:00
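A condensed sketch of the helper's logic as described above; the stand-in
structure and field names are illustrative, not the actual implementation:

#include <stdint.h>
#include <stdbool.h>

/* Minimal stand-ins for illustration only */
struct conn_sketch {
	uint32_t seq_to_tap;		/* highest sequence we sent */
	uint32_t seq_ack_from_tap;	/* highest sequence acknowledged */
	uint8_t retrans;		/* re-transmissions for the same data */
	bool ack_from_tap_due;		/* stands in for ACK_FROM_TAP_DUE */
};

static void update_seqack_from_tap_sketch(struct conn_sketch *conn,
					  uint32_t seq)
{
	if (seq != conn->seq_ack_from_tap)	/* progress was made */
		conn->retrans = 0;

	if (seq == conn->seq_to_tap)		/* everything acknowledged */
		conn->ack_from_tap_due = false;
	else					/* still waiting: extend timer */
		conn->ack_from_tap_due = true;

	conn->seq_ack_from_tap = seq;
}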

	if (th->ack && !(conn->events & ESTABLISHED))
		tcp_update_seqack_from_tap(c, conn, ntohl(th->ack_seq));

	/* Establishing connection from socket */
	if (conn->events & SOCK_ACCEPTED) {
		if (th->syn && th->ack && !th->fin) {
tcp_conn_from_sock_finish(c, conn, th, opts, optlen);
			return 1;
		}

		goto reset;
	}
/* Establishing connection from tap */
	if (conn->events & TAP_SYN_RCVD) {
		if (!(conn->events & TAP_SYN_ACK_SENT))
			goto reset;

		conn_event(c, conn, ESTABLISHED);
		if (tcp_set_peek_offset(conn->sock, 0))
			goto reset;

if (th->fin) {
tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connections to timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for originating socket in spliced connections,
there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-19 00:29:05 +00:00
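Of the items listed above, the sequence number comparison macros are the
simplest to picture; a sketch of the usual wrap-around-safe form, not copied
from the tree:

#include <stdint.h>

/* Compare 32-bit sequence numbers, tolerating wrap-around */
#define SEQ_LT(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) < 0)
#define SEQ_LE(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) <= 0)
#define SEQ_GT(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) > 0)
#define SEQ_GE(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) >= 0)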
conn->seq_from_tap++;
shutdown(conn->sock, SHUT_WR);
			tcp_send_flag(c, conn, ACK);
			conn_event(c, conn, SOCK_FIN_SENT);

tcp: Correct handling of FIN,ACK followed by SYN
When the guest tries to establish a connection, it could give up on it by
sending a FIN,ACK instead of a plain ACK to our SYN,ACK. It could then
make a new attempt to establish a connection with the same addresses and
ports with a new SYN.
Although it's unlikely, it could send the 2nd SYN very shortly after the
FIN,ACK resulting in both being received in the same batch of packets from
the tap interface.
Currently, we don't handle that correctly, when we receive a FIN,ACK on a
not fully established connection we discard the remaining packets in the
batch, and so will never process the 2nd SYN. Correct this by returning
1 from tcp_tap_handler() in this case, so we'll just consume the FIN,ACK
and continue to process the rest of the batch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-09-08 01:49:53 +00:00
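			/* Consume just the FIN,ACK: a new SYN might follow in
			 * the same batch and still has to be processed.
			 */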
			return 1;
}

		if (!th->ack)
			goto reset;

tcp_tap_window_update(conn, ntohs(th->window));
tcp_data_from_sock(c, conn);

		if (p->count - idx == 1)
			return 1;
	}

/* Established connections not accepting data from tap */
	if (conn->events & TAP_FIN_RCVD) {
tcp_update_seqack_from_tap(c, conn, ntohl(th->ack_seq));

		if (conn->events & SOCK_FIN_RCVD &&
		    conn->seq_ack_from_tap == conn->seq_to_tap)
conn_event(c, conn, CLOSED);

		return 1;
	}

/* Established connections accepting data from tap */
	count = tcp_data_from_tap(c, conn, p, idx);
	if (count == -1)
		goto reset;

tcp: Force TCP_WINDOW_CLAMP before resetting STALLED flag
It looks like we need it as workaround for this situation, readily
reproducible at least with a 6.5 Linux kernel, with default rmem_max
and wmem_max values:
- an iperf3 client on the host sends about 160 KiB, typically
segmented into five frames by passt. We read this data using
MSG_PEEK
- the iperf3 server on the guest starts receiving
- meanwhile, the host kernel advertised a zero-sized window to the
sender, as expected
- eventually, the guest acknowledges all the data sent so far, and
we drop it from the buffer, courtesy of tcp_sock_consume(), using
recv() with MSG_TRUNC
- the client, however, doesn't get an updated window value, and
even keepalive packets are answered with zero-window segments,
until the connection is closed
It looks like dropping data from a socket using MSG_TRUNC doesn't
cause a recalculation of the window, which would be expected as a
result of any receiving operation that invalidates data on a buffer
(that is, not with MSG_PEEK).
Strangely enough, setting TCP_WINDOW_CLAMP via setsockopt(), even to
the previous value we clamped to, forces a recalculation of the
window which is advertised to the sender.
I couldn't quite confirm this issue yet by following all the possible
code paths in the kernel. If confirmed, this should be fixed in
the kernel, but meanwhile this workaround looks robust to me (and it
will be needed for backward compatibility anyway).
Reported-by: Matej Hrica <mhrica@redhat.com>
Link: https://bugs.passt.top/show_bug.cgi?id=74
Analysed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
2023-09-22 21:21:20 +00:00
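For context, discarding already-acknowledged data without copying it, as
tcp_sock_consume() is described to do above, boils down to a recv() with
MSG_TRUNC; a minimal sketch with an assumed helper name:

#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

/* Discard @len bytes of pending data from socket @s without copying it */
static int sock_discard_sketch(int s, size_t len)
{
	if (recv(s, NULL, len, MSG_DONTWAIT | MSG_TRUNC) < 0)
		return -errno;

	return 0;
}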
	conn_flag(c, conn, ~STALLED);

if (conn->seq_ack_to_tap != conn->seq_from_tap)
		ack_due = 1;

if ((conn->events & TAP_FIN_RCVD) && !(conn->events & SOCK_FIN_SENT)) {
		shutdown(conn->sock, SHUT_WR);
		conn_event(c, conn, SOCK_FIN_SENT);
		tcp_send_flag(c, conn, ACK);
		ack_due = 0;
	}

if (ack_due)
		conn_flag(c, conn, ACK_TO_TAP_DUE);

return count;

reset:
/* Something's gone wrong, so reset the connection. We discard
	 * remaining packets in the batch, since they'd be invalidated when our
	 * RST is received, even if otherwise good.
	 */
	tcp_rst(c, conn);
	return p->count - idx;
}

/**
* tcp_connect_finish() - Handle completion of connect() from EPOLLOUT event
|
|
|
|
* @c: Execution context
|
2022-03-15 00:07:02 +00:00
|
|
|
* @conn: Connection pointer
|
 */
static void tcp_connect_finish(const struct ctx *c, struct tcp_tap_conn *conn)
{
	socklen_t sl;
	int so;

	sl = sizeof(so);
	if (getsockopt(conn->sock, SOL_SOCKET, SO_ERROR, &so, &sl) || so) {
		tcp_rst(c, conn);
		return;
	}

	if (tcp_send_flag(c, conn, SYN | ACK))
		return;

	conn_event(c, conn, TAP_SYN_ACK_SENT);
	conn_flag(c, conn, ACK_FROM_TAP_DUE);
}

/**
 * tcp_tap_conn_from_sock() - Initialize state for non-spliced connection
 * @c:		Execution context
 * @flow:	flow to initialise
 * @s:		Accepted socket
 * @now:	Current timestamp
 */
static void tcp_tap_conn_from_sock(const struct ctx *c, union flow *flow,
				   int s, const struct timespec *now)
{
	struct tcp_tap_conn *conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp);
	uint64_t hash;

	conn->sock = s;
	conn->timer = -1;
	conn->ws_to_tap = conn->ws_from_tap = 0;
	conn_event(c, conn, SOCK_ACCEPTED);

	hash = flow_hash_insert(c, TAP_SIDX(conn));
	conn->seq_to_tap = tcp_init_seq(hash, now);

tcp: Don't special case the handling of the ack of a syn
TCP treats the SYN packets as though they occupied 1 byte in the logical
data stream described by the sequence numbers. That is, the very first ACK
(or SYN-ACK) each side sends should acknowledge a sequence number one
greater than the initial sequence number given in the SYN or SYN-ACK it's
responding to.
In passt we were tracking that by advancing conn->seq_to_tap by one when
we send a SYN or SYN-ACK (in tcp_send_flag()). However, we also
initialized conn->seq_ack_from_tap, representing the acks we've already
seen from the tap side, to ISN+1, meaning we treated it has having
acknowledged the SYN before it actually did.
There were apparently reasons for this in earlier versions, but it causes
problems now. Because of this when we actually did receive the initial ACK
or SYN-ACK, we wouldn't see the acknowledged sequence number as advancing,
and so wouldn't clear the ACK_FROM_TAP_DUE flag.
In most cases we'd get away because subsequent packets would clear the
flag. However if one (or both) sides didn't send any data, the other side
would (correctly) keep sending ISN+1 as the acknowledged sequence number,
meaning we would never clear the ACK_FROM_TAP_DUE flag. That would mean
we'd treat the connection as if we needed to retransmit (although we had
0 bytes to retransmit), and eventually (after around 30s) reset the
connection due to too many retransmits. Specifically this could cause the
iperf3 throughput tests in the testsuite to fail if set for a long enough
test period.
Correct this by initializing conn->seq_ack_from_tap to the ISN and only
advancing it when we actually get the first ACK (or SYN-ACK).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-03-27 03:56:34 +00:00
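	/* The SYN takes up one unit of sequence space: with an ISN of, say,
	 * 1000, the peer's first (SYN-)ACK acknowledges 1001. Starting
	 * seq_ack_from_tap at the ISN itself means that first ACK is seen as
	 * an advance, which lets ACK_FROM_TAP_DUE be cleared.
	 */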
	conn->seq_ack_from_tap = conn->seq_to_tap;

tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connection to timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap (see the
sketch after this message)
- set TCP_QUICKACK for originating socket in spliced connections,
there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-19 00:29:05 +00:00
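A minimal, self-contained sketch of the send-buffer clamping described in the
list above (clamp_wnd_to_sndbuf() is an assumed name, not passt's code, which
folds this into its window handling):

#include <stdint.h>
#include <sys/socket.h>

static uint32_t clamp_wnd_to_sndbuf(int s, uint32_t wnd)
{
	int sndbuf = 0;
	socklen_t sl = sizeof(sndbuf);

	/* The kernel reports the (possibly rounded) send buffer size */
	if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &sl) || sndbuf <= 0)
		return wnd;

	/* Never advertise more than half of what we can queue for sending */
	if (wnd > (uint32_t)sndbuf / 2)
		wnd = (uint32_t)sndbuf / 2;

	return wnd;
}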
	conn->wnd_from_tap = WINDOW_DEFAULT;

	tcp_send_flag(c, conn, SYN);
	conn_flag(c, conn, ACK_FROM_TAP_DUE);

	tcp_get_sndbuf(conn);

	FLOW_ACTIVATE(conn);
}

/**
 * tcp_listen_handler() - Handle new connection request from listening socket
 * @c:		Execution context
 * @ref:	epoll reference of listening socket
 * @now:	Current timestamp
 */
void tcp_listen_handler(const struct ctx *c, union epoll_ref ref,
			const struct timespec *now)
{
	const struct flowside *ini;
	union sockaddr_inany sa;
	socklen_t sl = sizeof(sa);
	union flow *flow;
	int s;

	ASSERT(!c->no_tcp);

	if (!(flow = flow_alloc()))
		return;

	s = accept4(ref.fd, &sa.sa, &sl, SOCK_NONBLOCK);
	if (s < 0)
		goto cancel;

	/* FIXME: When listening port has a specific bound address, record that
	 * as our address
	 */
	ini = flow_initiate_sa(flow, ref.tcp_listen.pif, &sa,
			       ref.tcp_listen.port);

	if (!inany_is_unicast(&ini->eaddr) || ini->eport == 0) {
		char sastr[SOCKADDR_STRLEN];

		err("Invalid endpoint from TCP accept(): %s",
		    sockaddr_ntop(&sa, sastr, sizeof(sastr)));
		goto cancel;
tcp: Validate TCP endpoint addresses
TCP connections should typically not have wildcard addresses (0.0.0.0
or ::) nor a zero port number for either endpoint. It's not entirely
clear (at least to me) if it's strictly against the RFCs to do so, but
at any rate the socket interfaces often treat those values
specially[1], so it's not really possible to manipulate such
connections. Likewise they should not have broadcast or multicast
addresses for either endpoint.
However, nothing prevents a guest from creating a SYN packet with such
values, and it's not entirely clear what the effect on passt would be.
To ensure sane behaviour, explicitly check for this case and drop such
packets, logging a debug warning (we don't want a higher level,
because that would allow a guest to spam the logs).
We never expect such an address on an accept()ed socket either, but
just in case, check for it as well.
[1] Depending on context as "unknown", "match any" or "kernel, pick
something for me"
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2024-02-28 11:25:17 +00:00
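A standalone sketch of the endpoint check described above, for plain sockaddr
structures (tcp_endpoint_ok4()/tcp_endpoint_ok6() are assumed helper names, not
passt's inany_is_unicast()): reject wildcard, broadcast and multicast addresses
as well as a zero port.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>

static bool tcp_endpoint_ok4(const struct sockaddr_in *sa)
{
	uint32_t a = ntohl(sa->sin_addr.s_addr);

	return sa->sin_port != 0 &&		/* no zero port       */
	       a != INADDR_ANY &&		/* no 0.0.0.0         */
	       a != INADDR_BROADCAST &&		/* no 255.255.255.255 */
	       !IN_MULTICAST(a);		/* no 224.0.0.0/4     */
}

static bool tcp_endpoint_ok6(const struct sockaddr_in6 *sa)
{
	return sa->sin6_port != 0 &&
	       !IN6_IS_ADDR_UNSPECIFIED(&sa->sin6_addr) &&
	       !IN6_IS_ADDR_MULTICAST(&sa->sin6_addr);
}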
	}

	if (!flow_target(c, flow, IPPROTO_TCP))
		goto cancel;

	switch (flow->f.pif[TGTSIDE]) {
	case PIF_SPLICE:
	case PIF_HOST:
		tcp_splice_conn_from_sock(c, flow, s);
		break;

	case PIF_TAP:
		tcp_tap_conn_from_sock(c, flow, s, now);
		break;

	default:
		flow_err(flow, "No support for forwarding TCP from %s to %s",
			 pif_name(flow->f.pif[INISIDE]),
			 pif_name(flow->f.pif[TGTSIDE]));
		goto cancel;
	}

	return;

cancel:
	flow_alloc_cancel(flow);
}

/**
 * tcp_timer_handler() - timerfd events: close, send ACK, retransmit, or reset
 * @c:		Execution context
 * @ref:	epoll reference of timer (not connection)
 *
 * #syscalls timerfd_gettime arm:timerfd_gettime64 i686:timerfd_gettime64
 */
void tcp_timer_handler(const struct ctx *c, union epoll_ref ref)
{
	struct itimerspec check_armed = { { 0 }, { 0 } };
	struct tcp_tap_conn *conn = &FLOW(ref.flow)->tcp;

	ASSERT(!c->no_tcp);
	ASSERT(conn->f.type == FLOW_TCP);

	/* We don't reset timers on ~ACK_FROM_TAP_DUE, ~ACK_TO_TAP_DUE. If the
	 * timer is currently armed, this event came from a previous setting,
	 * and we just set the timer to a new point in the future: discard it.
	 */
	if (timerfd_gettime(conn->timer, &check_armed))
		flow_err(conn, "failed to read timer: %s", strerror(errno));

	if (check_armed.it_value.tv_sec || check_armed.it_value.tv_nsec)
		return;

	if (conn->flags & ACK_TO_TAP_DUE) {
		tcp_send_flag(c, conn, ACK_IF_NEEDED);
tcp: Don't reset ACK_TO_TAP_DUE on any ACK, reschedule timer as needed
This is mostly symmetric with commit cc6d8286d104 ("tcp: Reset
ACK_FROM_TAP_DUE flag only as needed, update timer"): we shouldn't
reset the ACK_TO_TAP_DUE flag on any inbound ACK segment, but only
once we acknowledge everything we received from the guest or the
container.
If we don't, a client might unnecessarily hold off further data,
especially during slow start, and in general we won't converge to the
usable bandwidth.
This is very visible especially with traffic tests on links with
non-negligible latency, such as in the reported issue. There, a
public iperf3 server sometimes aborts the test due to what appears
be a low iperf3's --rcv-timeout (probably less than a second). Even
if this doesn't happen, the throughput will converge to a fraction of
the usable bandwidth.
Clear ACK_TO_TAP_DUE if we acknowledged everything, set it if we
didn't, and reschedule the timer in case the flag is still set as the
timer expires.
While at it, decrease the ACK timer interval to 10ms.
A 50ms interval is short enough for any bandwidth-delay product I had
in mind (local connections, or non-local connections with limited
bandwidth), but here I am, testing 1gbps transfers to a peer with
100ms RTT.
Indeed, we could eventually make the timer interval dependent on the
current window and estimated bandwidth-delay product, but at least
for the moment being, 10ms should be long enough to avoid any
measurable syscall overhead, yet usable for any real-world
application.
Reported-by: Lukas Mrtvy <lukas.mrtvy@gmail.com>
Link: https://bugs.passt.top/show_bug.cgi?id=44
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-03-21 22:14:58 +00:00
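		/* Re-arm the timer below: if we still couldn't acknowledge
		 * everything we received from the guest, ACK_TO_TAP_DUE stays
		 * set and the next expiry gives us another chance.
		 */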
		tcp_timer_ctl(c, conn);
	} else if (conn->flags & ACK_FROM_TAP_DUE) {
		if (!(conn->events & ESTABLISHED)) {
			flow_dbg(conn, "handshake timeout");
			tcp_rst(c, conn);
treewide: Packet abstraction with mandatory boundary checks
Implement a packet abstraction providing boundary and size checks
based on packet descriptors: packets stored in a buffer can be queued
into a pool (without storage of its own), and data can be retrieved
referring to an index in the pool, specifying offset and length.
Checks ensure data is not read outside the boundaries of buffer and
descriptors, and that packets added to a pool are within the buffer
range with valid offset and indices.
This implies a wider rework: usage of the "queueing" part of the
abstraction mostly affects tap_handler_{passt,pasta}() functions and
their callees, while the "fetching" part affects all the guest or tap
facing implementations: TCP, UDP, ICMP, ARP, NDP, DHCP and DHCPv6
handlers.
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2022-03-25 12:02:47 +00:00
		} else if (CONN_HAS(conn, SOCK_FIN_SENT | TAP_FIN_ACKED)) {
			flow_dbg(conn, "FIN timeout");
			tcp_rst(c, conn);
		} else if (conn->retrans == TCP_MAX_RETRANS) {
			flow_dbg(conn, "retransmissions count exceeded");
			tcp_rst(c, conn);
		} else {
			flow_dbg(conn, "ACK timeout, retry");
			conn->retrans++;
			conn->seq_to_tap = conn->seq_ack_from_tap;
			if (tcp_set_peek_offset(conn->sock, 0)) {
				tcp_rst(c, conn);
			} else {
				tcp_data_from_sock(c, conn);
				tcp_timer_ctl(c, conn);
			}
		}
	} else {
		struct itimerspec new = { { 0 }, { ACT_TIMEOUT, 0 } };
		struct itimerspec old = { { 0 }, { 0 } };

		/* Activity timeout: if it was already set, reset the
		 * connection, otherwise, it was a left-over from ACK_TO_TAP_DUE
		 * or ACK_FROM_TAP_DUE, so just set the long timeout in that
		 * case. This avoids having to preemptively reset the timer on
		 * ~ACK_TO_TAP_DUE or ~ACK_FROM_TAP_DUE.
		 */
		if (timerfd_settime(conn->timer, 0, &new, &old))
			flow_err(conn, "failed to set timer: %s",
				 strerror(errno));

		if (old.it_value.tv_sec == ACT_TIMEOUT) {
			flow_dbg(conn, "activity timeout");
			tcp_rst(c, conn);
		}
	}
}
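The "treewide: Packet abstraction with mandatory boundary checks" message quoted
earlier describes bounds-checked access to packets queued in a pool. A minimal,
self-contained sketch of such a check (pkt_get_slice() is a hypothetical helper,
not passt's packet.c):

#include <stddef.h>

/* Return a pointer into buf for the requested slice, or NULL if either the
 * packet descriptor or the requested range falls outside its bounds.
 */
static void *pkt_get_slice(char *buf, size_t buf_size,
			   size_t pkt_off, size_t pkt_len,
			   size_t off, size_t len)
{
	/* Descriptor must lie entirely within the buffer */
	if (pkt_off > buf_size || pkt_len > buf_size - pkt_off)
		return NULL;

	/* Requested range must lie entirely within the descriptor */
	if (off > pkt_len || len > pkt_len - off)
		return NULL;

	return buf + pkt_off + off;
}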

/**
 * tcp_sock_handler() - Handle new data from non-spliced socket
 * @c:		Execution context
 * @ref:	epoll reference
 * @events:	epoll events bitmap
 */
void tcp_sock_handler(const struct ctx *c, union epoll_ref ref,
		      uint32_t events)
{
	struct tcp_tap_conn *conn = conn_at_sidx(ref.flowside);

	ASSERT(!c->no_tcp);
	ASSERT(pif_at_sidx(ref.flowside) != PIF_TAP);

	if (conn->events == CLOSED)
		return;

	if (events & EPOLLERR) {
		tcp_rst(c, conn);
		return;
	}

	if ((conn->events & TAP_FIN_SENT) && (events & EPOLLHUP)) {
		conn_event(c, conn, CLOSED);
tcp: Fixes for closing states, spliced connections, out-of-order packets, etc.
This fixes a number of issues found with some heavier testing with
uperf and neper:
- in most closing states, we can still accept data, check for EPOLLIN
when appropriate
- introduce a new state, ESTABLISHED_SOCK_FIN_SENT, to track the fact
we already sent a FIN segment to the tap device, for proper sequence
number bookkeeping
- for pasta mode only: spliced connections also need tracking of
(inferred) FIN segments and clean half-pipe shutdowns
- streamline resetting epoll_wait bitmaps with a new function,
tcp_tap_epoll_mask(), instead of repeating the logic all over the
place
- set EPOLLET for tap connections too, whenever we are waiting for
EPOLLRDHUP or an event from the tap to proceed with data transfer,
to avoid useless loops with EPOLLIN set
- impose an additional limit on the sending window advertised to the
guest, given by SO_SNDBUF: it makes no sense to completely fill
the sending buffer and send a zero window: stop a bit before we
hit that
- handle *all* interrupted system calls as needed
- simplify the logic for reordering of out-of-order segments received
from tap: it's not a corner case, and the previous logic allowed
for deadloops
- fix comparison of seen IPv4 address when we get a new connection
from a socket directed to the configured guest address
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-09 13:16:46 +00:00
		return;
	}

	if (conn->events & ESTABLISHED) {
		if (CONN_HAS(conn, SOCK_FIN_SENT | TAP_FIN_ACKED))
			conn_event(c, conn, CLOSED);

tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connection to timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for originating socket in spliced connections,
there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-19 00:29:05 +00:00
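The window clamping summarised above, plus the kind of wrap-safe sequence comparison the macros provide, can be pictured with a small standalone helper (a sketch with invented names, not the passt implementation):

#include <stdint.h>
#include <sys/socket.h>

/* Sequence comparison that stays correct across 32-bit wrap-around */
#define SEQ_GE(a, b)	((int32_t)((a) - (b)) >= 0)

/* Illustrative only: never advertise more than half of the socket send
 * buffer, so the buffer can't fill up completely and force a zero-window
 * announcement towards the guest.
 */
static uint32_t clamp_wnd(int s, uint32_t wnd)
{
	int sndbuf = 0;
	socklen_t sl = sizeof(sndbuf);

	if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &sl) || sndbuf <= 0)
		return wnd;

	if (wnd > (uint32_t)sndbuf / 2)
		wnd = (uint32_t)sndbuf / 2;

	return wnd;
}
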
		if (events & (EPOLLRDHUP | EPOLLHUP))
			conn_event(c, conn, SOCK_FIN_RCVD);

		if (events & EPOLLIN)
			tcp_data_from_sock(c, conn);

		if (events & EPOLLOUT)
			tcp_update_seqack_wnd(c, conn, false, NULL);

		return;
	}

	/* EPOLLHUP during handshake: reset */
	if (events & EPOLLHUP) {
		tcp_rst(c, conn);
		return;
	}

	/* Data during handshake tap-side: check later */
	if (conn->events & SOCK_ACCEPTED)
		return;

	if (conn->events == TAP_SYN_RCVD) {
		if (events & EPOLLOUT)
			tcp_connect_finish(c, conn);
		/* Data? Check later */
	}
}

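The edge-triggered registration mentioned in the closing-states notes above comes down to passing EPOLLET when the socket is added; a minimal sketch using the standard epoll API (the packed per-connection reference passt stores in the event data is omitted here):

#include <sys/epoll.h>

/* Illustrative only: watch a connected socket edge-triggered, so a socket
 * that stays readable doesn't keep waking us up while we wait for the peer
 * (or the tap side) to make progress.
 */
static int watch_sock(int epollfd, int s)
{
	struct epoll_event ev = {
		.events = EPOLLIN | EPOLLRDHUP | EPOLLET,
		.data.fd = s,
	};

	return epoll_ctl(epollfd, EPOLL_CTL_ADD, s, &ev);
}
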
/**
 * tcp_sock_init_one() - Initialise listening socket for address and port
 * @c: Execution context
 * @addr: Pointer to address for binding, NULL for dual stack any
 * @ifname: Name of interface to bind to, NULL if not configured
 * @port: Port, host order
 *
 * Return: fd for the new listening socket, negative error code on failure
 */
static int tcp_sock_init_one(const struct ctx *c, const union inany_addr *addr,
			     const char *ifname, in_port_t port)
{
	union tcp_listen_epoll_ref tref = {
		.port = port,
		.pif = PIF_HOST,
	};
	int s;

	s = pif_sock_l4(c, EPOLL_TYPE_TCP_LISTEN, PIF_HOST, addr,
			ifname, port, tref.u32);

	if (c->tcp.fwd_in.mode == FWD_AUTO) {
		if (!addr || inany_v4(addr))
			tcp_sock_init_ext[port][V4] = s < 0 ? -1 : s;
		if (!addr || !inany_v4(addr))
			tcp_sock_init_ext[port][V6] = s < 0 ? -1 : s;
	}

Don't create 'tap' socket for ports that are bound to loopback only
If the user specifies an explicit loopback address for a port
binding, we're going to use that address for the 'tap' socket, and
the same exact address for the 'spliced' socket (because those are,
by definition, only bound to loopback addresses).
This means that the second binding will fail, and, unexpectedly, the
port is forwarded, but via tap device, which means the source address
in the namespace won't be a loopback address.
Make it explicit under which conditions we're creating which kind of
socket, by refactoring tcp_sock_init() into two separate functions
for IPv4 and IPv6 and gathering those conditions at the beginning.
Also, don't create spliced sockets if the user specifies explicitly
a non-loopback address, those are harmless but not desired either.
Fixes: 3c6ae625101a ("conf, tcp, udp: Allow address specification for forwarded ports")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2022-10-12 14:48:52 +00:00
	if (s < 0)
		return s;

	tcp_sock_set_bufsize(c, s);

	return s;
}

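The loopback-only rule described in the note above hinges on classifying the configured bind address; a sketch using standard libc checks (helper names invented for illustration, not part of tcp.c):

#include <stdbool.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Illustrative only: an IPv4 address is loopback if it falls in
 * 127.0.0.0/8, an IPv6 address if it is ::1.
 */
static bool addr4_is_loopback(const struct in_addr *a)
{
	return (ntohl(a->s_addr) >> 24) == 127;
}

static bool addr6_is_loopback(const struct in6_addr *a)
{
	return IN6_IS_ADDR_LOOPBACK(a);
}
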
/**
 * tcp_sock_init() - Create listening sockets for a given host ("inbound") port
 * @c: Execution context
 * @addr: Pointer to address for binding, NULL if not configured
 * @ifname: Name of interface to bind to, NULL if not configured
 * @port: Port, host order
 *
 * Return: 0 on (partial) success, negative error code on (complete) failure
 */
int tcp_sock_init(const struct ctx *c, const union inany_addr *addr,
		  const char *ifname, in_port_t port)
{
	int r4 = FD_REF_MAX + 1, r6 = FD_REF_MAX + 1;

	ASSERT(!c->no_tcp);

	if (!addr && c->ifi4 && c->ifi6)
		/* Attempt to get a dual stack socket */
		if (tcp_sock_init_one(c, NULL, ifname, port) >= 0)
			return 0;

	/* Otherwise create a socket per IP version */
	if ((!addr || inany_v4(addr)) && c->ifi4)
		r4 = tcp_sock_init_one(c, addr ? addr : &inany_any4,
				       ifname, port);

	if ((!addr || !inany_v4(addr)) && c->ifi6)
		r6 = tcp_sock_init_one(c, addr ? addr : &inany_any6,
				       ifname, port);

	if (IN_INTERVAL(0, FD_REF_MAX, r4) || IN_INTERVAL(0, FD_REF_MAX, r6))
		return 0;

	return r4 < 0 ? r4 : r6;
}

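The dual stack socket attempted first in tcp_sock_init() typically means an AF_INET6 listener with IPV6_V6ONLY cleared, which then also accepts IPv4 clients as v4-mapped addresses; a standalone sketch of that pattern, not the passt code:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative only: one listening socket serving both IP versions */
static int listen_dual_stack(in_port_t port)
{
	struct sockaddr_in6 sa = {
		.sin6_family = AF_INET6,
		.sin6_addr = IN6ADDR_ANY_INIT,
		.sin6_port = htons(port),
	};
	int no = 0, s;

	s = socket(AF_INET6, SOCK_STREAM | SOCK_CLOEXEC, 0);
	if (s < 0)
		return -1;

	setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &no, sizeof(no));

	if (bind(s, (struct sockaddr *)&sa, sizeof(sa)) || listen(s, 128)) {
		close(s);
		return -1;
	}

	return s;
}
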
/**
 * tcp_ns_sock_init4() - Init socket to listen for outbound IPv4 connections
 * @c: Execution context
 * @port: Port, host order
 */
static void tcp_ns_sock_init4(const struct ctx *c, in_port_t port)
{
	union tcp_listen_epoll_ref tref = {
		.port = port,
		.pif = PIF_SPLICE,
	};
	int s;

	ASSERT(c->mode == MODE_PASTA);

	s = pif_sock_l4(c, EPOLL_TYPE_TCP_LISTEN, PIF_SPLICE, &inany_loopback4,
			NULL, port, tref.u32);
	if (s >= 0)
		tcp_sock_set_bufsize(c, s);
	else
		s = -1;

	if (c->tcp.fwd_out.mode == FWD_AUTO)
		tcp_sock_ns[port][V4] = s;
}

/**
 * tcp_ns_sock_init6() - Init socket to listen for outbound IPv6 connections
 * @c: Execution context
 * @port: Port, host order
 */
static void tcp_ns_sock_init6(const struct ctx *c, in_port_t port)
{
	union tcp_listen_epoll_ref tref = {
		.port = port,
		.pif = PIF_SPLICE,
	};
	int s;

	ASSERT(c->mode == MODE_PASTA);

	s = pif_sock_l4(c, EPOLL_TYPE_TCP_LISTEN, PIF_SPLICE, &inany_loopback6,
			NULL, port, tref.u32);
	if (s >= 0)
		tcp_sock_set_bufsize(c, s);
	else
		s = -1;

	if (c->tcp.fwd_out.mode == FWD_AUTO)
		tcp_sock_ns[port][V6] = s;
}

/**
 * tcp_ns_sock_init() - Init socket to listen for spliced outbound connections
 * @c: Execution context
 * @port: Port, host order
 */
void tcp_ns_sock_init(const struct ctx *c, in_port_t port)
{
	ASSERT(!c->no_tcp);

	if (c->ifi4)
		tcp_ns_sock_init4(c, port);
	if (c->ifi6)
		tcp_ns_sock_init6(c, port);
}

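For the spliced sockets set up above, data between the two loopback endpoints can be relayed in kernel space; a rough sketch of one direction of such a relay, with the pipe that splice(2) requires in the middle (illustrative only, a real relay loops until the pipe is drained):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* Illustrative only: move up to 'len' bytes from socket 'from' to socket
 * 'to' without copying the data through userspace.
 */
static ssize_t splice_once(int from, int to, int pipefd[2], size_t len)
{
	ssize_t in, out;

	in = splice(from, NULL, pipefd[1], NULL, len,
		    SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
	if (in <= 0)
		return in;

	out = splice(pipefd[0], NULL, to, NULL, (size_t)in,
		     SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
	return out;
}
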
/**
 * tcp_ns_socks_init() - Bind sockets in namespace for outbound connections
 * @arg: Execution context
 *
passt, pasta: Namespace-based sandboxing, defer seccomp policy application
To reach (at least) a conceptually equivalent security level as
implemented by --enable-sandbox in slirp4netns, we need to create a
new mount namespace and pivot_root() into a new (empty) mountpoint, so
that passt and pasta can't access any filesystem resource after
initialisation.
While at it, also detach IPC, PID (only for passt, to prevent
vulnerabilities based on the knowledge of a target PID), and UTS
namespaces.
With this approach, if we apply the seccomp filters right after the
configuration step, the number of allowed syscalls grows further. To
prevent this, defer the application of seccomp policies after the
initialisation phase, before the main loop, that's where we expect bad
things to happen, potentially. This way, we get back to 22 allowed
syscalls for passt and 34 for pasta, on x86_64.
While at it, move #syscalls notes to specific code paths wherever it
conceptually makes sense.
We have to open all the file handles we'll ever need before
sandboxing:
- the packet capture file can only be opened once, drop instance
numbers from the default path and use the (pre-sandbox) PID instead
- /proc/net/tcp{,v6} and /proc/net/udp{,v6}, for automatic detection
of bound ports in pasta mode, are now opened only once, before
sandboxing, and their handles are stored in the execution context
- the UNIX domain socket for passt is also bound only once, before
sandboxing: to reject clients after the first one, instead of
closing the listening socket, keep it open, accept and immediately
discard new connection if we already have a valid one
Clarify the (unchanged) behaviour for --netns-only in the man page.
To actually make passt and pasta processes run in a separate PID
namespace, we need to unshare(CLONE_NEWPID) before forking to
background (if configured to do so). Introduce a small daemon()
implementation, __daemon(), that additionally saves the PID file
before forking. While running in foreground, the process itself can't
move to a new PID namespace (a process can't change the notion of its
own PID): mention that in the man page.
For some reason, fork() in a detached PID namespace causes SIGTERM
and SIGQUIT to be ignored, even if the handler is still reported as
SIG_DFL: add a signal handler that just exits.
We can now drop most of the pasta_child_handler() implementation,
that took care of terminating all processes running in the same
namespace, if pasta started a shell: the shell itself is now the
init process in that namespace, and all children will terminate
once the init process exits.
Issuing 'echo $$' in a detached PID namespace won't return the
actual namespace PID as seen from the init namespace: adapt
demo and test setup scripts to reflect that.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2022-02-07 20:11:37 +00:00
 * Return: 0
 */
/* cppcheck-suppress [constParameterCallback, unmatchedSuppression] */
static int tcp_ns_socks_init(void *arg)
{
	const struct ctx *c = (const struct ctx *)arg;
	unsigned port;

	ns_enter(c);

	for (port = 0; port < NUM_PORTS; port++) {
		if (!bitmap_isset(c->tcp.fwd_out.map, port))
			continue;

		tcp_ns_sock_init(c, port);
	}

	return 0;
}

/**
 * tcp_sock_refill_pool() - Refill one pool of pre-opened sockets
 * @c: Execution context
 * @pool: Pool of sockets to refill
 * @af: Address family to use
 *
 * Return: 0 on success, negative error code if there was at least one error
 */
int tcp_sock_refill_pool(const struct ctx *c, int pool[], sa_family_t af)
{
	int i;

	for (i = 0; i < TCP_SOCK_POOL_SIZE; i++) {
		int fd;

		if (pool[i] >= 0)
			continue;

		if ((fd = tcp_conn_new_sock(c, af)) < 0)
			return fd;

		pool[i] = fd;
	}

	return 0;
}
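
To show why the pool pays off, here is a hedged sketch of how a connection handler could take a pre-opened socket from such a pool, falling back to opening one on the spot when every slot is empty. sock_pool_take() is a hypothetical helper, not the actual passt code path; ctx, TCP_SOCK_POOL_SIZE and tcp_conn_new_sock() are the identifiers already used in tcp_sock_refill_pool() above.

/* Sketch: take a pre-opened socket from a pool refilled by
 * tcp_sock_refill_pool(); empty slots hold -1. Hypothetical helper.
 */
static int sock_pool_take(const struct ctx *c, int pool[], sa_family_t af)
{
	int i;

	for (i = 0; i < TCP_SOCK_POOL_SIZE; i++) {
		if (pool[i] >= 0) {
			int s = pool[i];

			pool[i] = -1;	/* slot will be refilled later */
			return s;
		}
	}

	/* Pool exhausted: open a socket right away instead */
	return tcp_conn_new_sock(c, af);
}
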
tcp: Rework window handling, timers, add SO_RCVLOWAT and pools for sockets/pipes
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from socket
- add macros to compare sequence numbers
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connection to timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for originating socket in spliced connections,
there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2021-09-19 00:29:05 +00:00
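
The "macros to compare sequence numbers" mentioned in the summary are typically wrap-around-safe comparisons over the 32-bit sequence space; a minimal sketch follows, not necessarily the exact form used in tcp.c.

/* Sketch: wrap-around-safe comparisons for 32-bit TCP sequence numbers.
 * The signed difference is correct as long as the two values are less
 * than 2^31 apart, which holds for any window TCP can advertise.
 */
#include <stdint.h>

#define SEQ_LT(a, b)	((int32_t)((a) - (b)) < 0)
#define SEQ_LE(a, b)	((int32_t)((a) - (b)) <= 0)
#define SEQ_GT(a, b)	((int32_t)((a) - (b)) > 0)
#define SEQ_GE(a, b)	((int32_t)((a) - (b)) >= 0)
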

/**
 * tcp_sock_refill_init() - Refill pools of pre-opened sockets in init ns
 * @c: Execution context
 */
static void tcp_sock_refill_init(const struct ctx *c)
{
	if (c->ifi4) {
		int rc = tcp_sock_refill_pool(c, init_sock_pool4, AF_INET);

		if (rc < 0)
			warn("TCP: Error refilling IPv4 host socket pool: %s",
			     strerror(-rc));
	}

	if (c->ifi6) {
		int rc = tcp_sock_refill_pool(c, init_sock_pool6, AF_INET6);

		if (rc < 0)
			warn("TCP: Error refilling IPv6 host socket pool: %s",
			     strerror(-rc));
	}
}

/**
 * tcp_probe_peek_offset_cap() - Check if SO_PEEK_OFF is supported by kernel
 * @af: Address family, IPv4 or IPv6
 *
 * Return: true if supported, false otherwise
 */
static bool tcp_probe_peek_offset_cap(sa_family_t af)
{
	bool ret = false;
	int s, optv = 0;

	s = socket(af, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
	if (s < 0) {
		warn_perror("Temporary TCP socket creation failed");
	} else {
		if (!setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &optv, sizeof(int)))
			ret = true;
		close(s);
	}

	return ret;
}
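
The probe above only checks that SO_PEEK_OFF can be set on a TCP socket. What the capability enables, roughly, is peeking received data without dequeuing it while the kernel keeps track of how far peeking has advanced. A minimal sketch, with peek_from() as a hypothetical helper rather than a passt function:

/* Sketch: with SO_PEEK_OFF set to a non-negative value, recv(..., MSG_PEEK)
 * starts at that offset into the receive queue, and the kernel advances
 * the offset by the number of bytes peeked.
 */
#include <sys/types.h>
#include <sys/socket.h>

static ssize_t peek_from(int s, void *buf, size_t len, int offset)
{
	if (setsockopt(s, SOL_SOCKET, SO_PEEK_OFF, &offset, sizeof(offset)))
		return -1;

	return recv(s, buf, len, MSG_PEEK);
}
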

tcp: Remove compile-time dependency on struct tcp_info version
In the Makefile we probe to create several defines based on the presence
of particular fields in struct tcp_info. These defines are used for two
purposes, neither of which they accomplish well:
1) Determining if the tcp_info fields are available at runtime. For this
purpose the defines are Just Plain Wrong, since the runtime kernel may
not be the same as the compile time kernel. We corrected this for
tcpi_snd_wnd, but not for tcpi_bytes_acked or tcpi_min_rtt.
2) Allowing the source to compile against older kernel headers which don't
have the fields in question. This works in theory, but it does mean
we won't be able to use the fields, even if later run against a
newer kernel. Furthermore, it's quite fragile: without much more
thorough tests of builds in different environments than we're currently
set up for, it's very easy to miss cases where we're accessing a field
without protection from an #ifdef. For example we currently access
tcpi_snd_wnd without #ifdefs in tcp_update_seqack_wnd().
Improve this with a different approach, borrowed from qemu (which has many
instances of similar problems). Don't compile against linux/tcp.h, using
netinet/tcp.h instead. Then, when we need an extension field, define
a struct tcp_info_linux, copied from the kernel, with all the fields we're
interested in. That may need updating from future kernel versions, but
only when we want to use a new extension, so it shouldn't be frequent.
This allows us to remove the HAS_SND_WND define entirely. We keep
HAS_BYTES_ACKED and HAS_MIN_RTT for now, since they're used for purpose (1);
we'll fix that in a later patch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Trivial grammar fixes in comments]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2024-10-24 04:59:20 +00:00

/**
 * tcp_probe_tcp_info() - Check what data TCP_INFO reports
 *
 * Return: Number of bytes returned by TCP_INFO getsockopt()
 */
static socklen_t tcp_probe_tcp_info(void)
{
	struct tcp_info_linux tinfo;
	socklen_t sl = sizeof(tinfo);
	int s;

	s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);
	if (s < 0) {
		warn_perror("Temporary TCP socket creation failed");
		return false;
	}

	if (getsockopt(s, SOL_TCP, TCP_INFO, &tinfo, &sl)) {
		warn_perror("Failed to get TCP_INFO on temporary socket");
		close(s);
		return false;
	}

	close(s);

	return sl;
}
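
The length returned above can then be compared against field offsets to decide at run time whether the kernel filled in a given extension field. Below is a sketch of such a check, assuming struct tcp_info_linux is the copied structure described in the commit message and info_len is the probed length; the actual tcp_info_cap() macro used in tcp_init() further down may be defined differently.

/* Sketch: a field is usable if the probed TCP_INFO length covers it
 * entirely; tcpi_##f_ follows the kernel's field naming.
 */
#include <stddef.h>

#define tcp_info_has(info_len, f_)					\
	(offsetof(struct tcp_info_linux, tcpi_##f_) +			\
	 sizeof(((struct tcp_info_linux *)0)->tcpi_##f_) <= (info_len))
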

/**
 * tcp_init() - Get initial sequence, hash secret, initialise per-socket data
 * @c: Execution context
 *
 * Return: 0, doesn't return on failure
 */
int tcp_init(struct ctx *c)
{
	ASSERT(!c->no_tcp);

	tcp_sock_iov_init(c);

	memset(init_sock_pool4, 0xff, sizeof(init_sock_pool4));
	memset(init_sock_pool6, 0xff, sizeof(init_sock_pool6));
	memset(tcp_sock_init_ext, 0xff, sizeof(tcp_sock_init_ext));
	memset(tcp_sock_ns, 0xff, sizeof(tcp_sock_ns));

	tcp_sock_refill_init(c);

	if (c->mode == MODE_PASTA) {
		tcp_splice_init(c);

		NS_CALL(tcp_ns_socks_init, c);
	}

	peek_offset_cap = (!c->ifi4 || tcp_probe_peek_offset_cap(AF_INET)) &&
			  (!c->ifi6 || tcp_probe_peek_offset_cap(AF_INET6));
	debug("SO_PEEK_OFF%ssupported", peek_offset_cap ? " " : " not ");

	tcp_info_size = tcp_probe_tcp_info();

#define dbg_tcpi(f_)	debug("TCP_INFO tcpi_%s field%s supported", \
			      STRINGIFY(f_), tcp_info_cap(f_) ? " " : " not ")
	dbg_tcpi(snd_wnd);
	dbg_tcpi(bytes_acked);
	dbg_tcpi(min_rtt);
#undef dbg_tcpi

	return 0;
}

/**
 * tcp_port_rebind() - Rebind ports to match forward maps
 * @c: Execution context
 * @outbound: True to remap outbound forwards, otherwise inbound
 *
 * Must be called in namespace context if @outbound is true.
 */
static void tcp_port_rebind(struct ctx *c, bool outbound)
{
	const uint8_t *fmap = outbound ? c->tcp.fwd_out.map : c->tcp.fwd_in.map;
	const uint8_t *rmap = outbound ? c->tcp.fwd_in.map : c->tcp.fwd_out.map;
	int (*socks)[IP_VERSIONS] = outbound ? tcp_sock_ns : tcp_sock_init_ext;
	unsigned port;

	for (port = 0; port < NUM_PORTS; port++) {
		if (!bitmap_isset(fmap, port)) {
			if (socks[port][V4] >= 0) {
				close(socks[port][V4]);
				socks[port][V4] = -1;
			}

			if (socks[port][V6] >= 0) {
				close(socks[port][V6]);
				socks[port][V6] = -1;
			}

			continue;
		}

		/* Don't loop back our own ports */
		if (bitmap_isset(rmap, port))
			continue;

		if ((c->ifi4 && socks[port][V4] == -1) ||
		    (c->ifi6 && socks[port][V6] == -1)) {
			if (outbound)
				tcp_ns_sock_init(c, port);
			else
				tcp_sock_init(c, NULL, NULL, port);
		}
	}
}

/**
 * tcp_port_rebind_outbound() - Rebind ports in namespace
 * @arg: Execution context
 *
 * Called with NS_CALL()
 *
 * Return: 0
 */
static int tcp_port_rebind_outbound(void *arg)
{
	struct ctx *c = (struct ctx *)arg;

	ns_enter(c);
	tcp_port_rebind(c, true);

	return 0;
}

treewide: Packet abstraction with mandatory boundary checks
Implement a packet abstraction providing boundary and size checks
based on packet descriptors: packets stored in a buffer can be queued
into a pool (without storage of its own), and data can be retrieved
referring to an index in the pool, specifying offset and length.
Checks ensure data is not read outside the boundaries of buffer and
descriptors, and that packets added to a pool are within the buffer
range with valid offset and indices.
This implies a wider rework: usage of the "queueing" part of the
abstraction mostly affects tap_handler_{passt,pasta}() functions and
their callees, while the "fetching" part affects all the guest or tap
facing implementations: TCP, UDP, ICMP, ARP, NDP, DHCP and DHCPv6
handlers.
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2022-03-25 12:02:47 +00:00
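
As a rough illustration of the abstraction described above, a pool holds descriptors pointing into a caller-owned buffer, and every fetch is validated against both the descriptor and the buffer. Names and the fixed capacity are made up for the sketch; this is not the actual packet.h interface.

#include <stddef.h>

struct pkt_desc {
	size_t offset;			/* packet start within the buffer */
	size_t len;			/* packet length */
};

struct pkt_pool {
	const char *buf;		/* backing buffer, not owned here */
	size_t buf_size;
	size_t count;			/* descriptors currently queued */
	struct pkt_desc desc[128];	/* illustrative fixed capacity */
};

/* Return a pointer to 'len' bytes at 'offset' within packet 'idx', or
 * NULL if the request falls outside the descriptor or the buffer.
 */
static const char *pkt_get(const struct pkt_pool *p, size_t idx,
			   size_t offset, size_t len)
{
	const struct pkt_desc *d;

	if (idx >= p->count || idx >= sizeof(p->desc) / sizeof(p->desc[0]))
		return NULL;

	d = &p->desc[idx];
	if (d->offset > p->buf_size || d->len > p->buf_size - d->offset)
		return NULL;

	if (offset > d->len || len > d->len - offset)
		return NULL;

	return p->buf + d->offset + offset;
}
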

/**
 * tcp_timer() - Periodic tasks: port detection, closed connections, pool refill
 * @c: Execution context
 * @now: Current timestamp
 */
void tcp_timer(struct ctx *c, const struct timespec *now)
{
	(void)now;

	if (c->mode == MODE_PASTA) {
		if (c->tcp.fwd_out.mode == FWD_AUTO) {
			fwd_scan_ports_tcp(&c->tcp.fwd_out, &c->tcp.fwd_in);
			NS_CALL(tcp_port_rebind_outbound, c);
		}

		if (c->tcp.fwd_in.mode == FWD_AUTO) {
			fwd_scan_ports_tcp(&c->tcp.fwd_in, &c->tcp.fwd_out);
			tcp_port_rebind(c, false);
		}
	}

	tcp_sock_refill_init(c);
	if (c->mode == MODE_PASTA)
		tcp_splice_refill(c);
}