tcp: Remove broken pressure calculations for tcp_defer_handler()

tcp_defer_handler() performs a potentially expensive linear scan of the
connection table.  So, to mitigate the cost of that, we skip it if we're not
under at least moderate pressure: either 30% of available connections or
30% (estimated) of available fds used.

But, the calculation for this has been broken since it was introduced: we
calculate "max_conns" based on c->tcp.conn_count, not TCP_MAX_CONNS,
meaning we only exit early if conn_count is less than 30% of itself, i.e.
never.

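To illustrate, here is a minimal standalone sketch (not part of the patch;
a plain int stands in for c->tcp.conn_count, and the TCP_CONN_PRESSURE
define removed below is reproduced just for the example):

#include <stdio.h>

#define TCP_CONN_PRESSURE	30	/* %, as in the define removed below */

int main(void)
{
	int conn_count;			/* stands in for c->tcp.conn_count */

	for (conn_count = 0; conn_count <= 100000; conn_count += 10000) {
		int max_conns = conn_count / 100 * TCP_CONN_PRESSURE;

		/* "conn_count < 30% of conn_count" can never hold... */
		printf("conn_count=%6d max_conns=%6d early exit: %s\n",
		       conn_count, max_conns,
		       conn_count < max_conns ? "yes" : "no");
	}

	/* ...so every row prints "no" and the scan is never skipped */
	return 0;
}
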
If that calculation is "corrected" to be based on TCP_MAX_CONNS, it
completely tanks the TCP CRR times for passt - from ~60ms to >1000ms on my
laptop.  My guess is that this is because, in the case of many short-lived
connections, we're letting the table become much fuller before compacting
it.  That means that other places which perform a table scan now have to
do much, much more.

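For reference, a standalone sketch of what those capacity-based thresholds
would look like (TCP_MAX_CONNS and the fd limit below are assumed,
illustrative values, not taken from this patch):

#include <stdio.h>

#define TCP_MAX_CONNS		(128 * 1024)	/* assumed table capacity */
#define TCP_CONN_PRESSURE	30		/* % */
#define TCP_FILE_PRESSURE	30		/* % */
#define MIN(a, b)		((a) < (b) ? (a) : (b))

int main(void)
{
	int nofile = 256 * 1024;	/* assumed fd limit (c->nofile) */
	int max_conns = TCP_MAX_CONNS / 100 * TCP_CONN_PRESSURE;
	int max_files = nofile / 100 * TCP_FILE_PRESSURE;

	/* With capacity-based thresholds, the early exit is taken until the
	 * table already holds tens of thousands of entries, so closed
	 * connections linger uncompacted: the suspected cause of the CRR
	 * regression described above. */
	printf("table scan deferred while conn_count < %d\n",
	       MIN(max_files, max_conns));
	return 0;
}
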
For the time being, simply remove the tests, since they're not doing
anything useful.  We can reintroduce them more carefully if we see a need
for them.

This also removes the only user of c->tcp.splice_conn_count, so that can
be removed as well.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
David Gibson 2023-08-22 15:30:00 +10:00 committed by Stefano Brivio
parent eb8fbdbfd0
commit 69303cafbe
3 changed files with 0 additions and 13 deletions

9
tcp.c

@@ -309,9 +309,6 @@
 #define TCP_FRAMES						\
 	(c->mode == MODE_PASST ? TCP_FRAMES_MEM : 1)
 
-#define TCP_FILE_PRESSURE	30	/* % of c->nofile */
-#define TCP_CONN_PRESSURE	30	/* % of c->tcp.conn_count */
-
 #define TCP_HASH_TABLE_LOAD	70	/* % */
 #define TCP_HASH_TABLE_SIZE	(TCP_MAX_CONNS * 100 /		\
 				 TCP_HASH_TABLE_LOAD)
@@ -1385,17 +1382,11 @@ static void tcp_l2_data_buf_flush(struct ctx *c)
  */
 void tcp_defer_handler(struct ctx *c)
 {
-	int max_conns = c->tcp.conn_count / 100 * TCP_CONN_PRESSURE;
-	int max_files = c->nofile / 100 * TCP_FILE_PRESSURE;
 	union tcp_conn *conn;
 
 	tcp_l2_flags_buf_flush(c);
 	tcp_l2_data_buf_flush(c);
 
-	if ((c->tcp.conn_count < MIN(max_files, max_conns)) &&
-	    (c->tcp.splice_conn_count < MIN(max_files / 6, max_conns)))
-		return;
-
 	for (conn = tc + c->tcp.conn_count - 1; conn >= tc; conn--) {
 		if (conn->c.spliced) {
 			if (conn->splice.flags & CLOSING)

2
tcp.h

@@ -56,7 +56,6 @@ union tcp_listen_epoll_ref {
  * struct tcp_ctx - Execution context for TCP routines
  * @hash_secret:	128-bit secret for hash functions, ISN and hash table
  * @conn_count:		Count of total connections in connection table
- * @splice_conn_count:	Count of spliced connections in connection table
  * @port_to_tap:	Ports bound host-side, packets to tap or spliced
  * @fwd_in:		Port forwarding configuration for inbound packets
  * @fwd_out:		Port forwarding configuration for outbound packets
@@ -67,7 +66,6 @@ union tcp_listen_epoll_ref {
 struct tcp_ctx {
 	uint64_t hash_secret[2];
 	int conn_count;
-	int splice_conn_count;
 	struct port_fwd fwd_in;
 	struct port_fwd fwd_out;
 	struct timespec timer_run;

2
tcp_splice.c

@@ -295,7 +295,6 @@ void tcp_splice_destroy(struct ctx *c, union tcp_conn *conn_union)
 	conn->flags = 0;
 	debug("TCP (spliced): index %li, CLOSED", CONN_IDX(conn));
 
-	c->tcp.splice_conn_count--;
 	tcp_table_compact(c, conn_union);
 }
@@ -513,7 +512,6 @@ bool tcp_splice_conn_from_sock(struct ctx *c, union tcp_listen_epoll_ref ref,
 		trace("TCP (spliced): failed to set TCP_QUICKACK on %i", s);
 
 	conn->c.spliced = true;
-	c->tcp.splice_conn_count++;
 	conn->a = s;
 
 	if (tcp_splice_new(c, conn, ref.port, ref.ns))