Latest commit: f9ff6678d4 test/perf: Get iperf3 stats from client side (David Gibson)
iperf3 generates statistics about its run on both the client and server
sides.  They don't have exactly the same information, but both have the
pieces we need (AFAICT the server communicates some information to the
client over the control socket, so the most important information is in the
client side output, even if measured by the server).

Currently we use the server side information for our measurements. Using
the client side information has several advantages though:

 * We can directly wait for the client to complete and we know we'll have
   the output we want.  We don't need to sleep to give the server time to
   write out the results.
 * That in turn means we can wrap up as soon as the client is done, we
   don't need to wait overlong to make sure everything is finished.
 * The slightly different organisation of the data in the client output
   means that we always want the same json value, rather than requiring
   slightly different ones for UDP and TCP.

The fact that we avoid some extra delays speeds up the overall run of the
perf tests by around 7 minutes (out of around 35 minutes) on my laptop.

The fact that we no longer unconditionally kill client and server after
a certain time means that the client could run indefinitely if the server
doesn't respond.  We mitigate that by setting a 1s connect timeout on the
client.  This isn't foolproof - if we get an initial response, but then
lose connectivity this could still run indefinitely, however it does cover
by far the most likely failure cases.  --snd-timeout would provide more
robustness, but I've hit odd failures when trying to use it.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
2023-11-07 09:56:06 +01:00

Scope

This directory contains test cases for passt and pasta, and a simple POSIX shell-based framework to define and run them as a suite.

These tests can be run as part of a continuous integration workflow, and are also used to provide short usage demos, with video recording, for basic passt and pasta use cases.

Run

Dependencies

Packages

The tests require some package dependencies commonly available in Linux distributions. If some packages are not available, the test groups that need them will be selectively skipped.

This is a non-exhaustive list of packages that might not commonly be installed on a system, i.e. common utilities such as a shell are not included here.

Example for Debian, and possibly most Debian-based distributions:

build-essential git jq strace iperf3 qemu-system-x86 tmux sipcalc bats bc
catatonit clang-tidy cppcheck go isc-dhcp-common psmisc linux-cpupower socat
netcat-openbsd fakeroot lz4 lm-sensors qemu-system-arm qemu-system-ppc
qemu-system-misc valgrind
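
On a Debian-based system, that list can be fed directly to apt(8). This one-liner is just a sketch, assuming the package names above match your release:

sudo apt install build-essential git jq strace iperf3 qemu-system-x86 tmux \
    sipcalc bats bc catatonit clang-tidy cppcheck go isc-dhcp-common psmisc \
    linux-cpupower socat netcat-openbsd fakeroot lz4 lm-sensors \
    qemu-system-arm qemu-system-ppc qemu-system-misc valgrind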

NOTE: the tests need a qemu version >= 7.2, or one that contains commit 13c6be96618c ("net: stream: add unix socket"): this change introduces support for UNIX domain sockets as a network device back-end, which qemu uses to connect to passt.
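
To verify this requirement quickly, assuming qemu-system-x86_64 is the qemu binary the tests will use:

qemu-system-x86_64 --version    # expect version 7.2.0 or newer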

Other tools

Tests measuring request-response and connect-request-response latencies use neper, which is not commonly packaged by distributions and needs to be built and installed manually:

git clone https://github.com/google/neper
cd neper; make
cp tcp_crr tcp_rr udp_rr /usr/local/bin

Virtual machine images are built during test execution using mbuto; the shell script is fetched via git as needed, so there's no need to actually install it.

Kernel parameters

Performance tests use iperf3 with rather large TCP receive and send windows, to decrease the likelihood of iperf3 itself becoming the bottleneck. These values need to be allowed by the kernel of the host running the tests. Example for /etc/sysctl.conf:

net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
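
The same values can also be applied to the running kernel, as root, without a reboot:

sysctl -w net.core.rmem_max=134217728
sysctl -w net.core.wmem_max=134217728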

Further, the passt demo uses perf(1), relying on hardware events for performance counters, to display syscall overhead. The kernel needs to allow unprivileged users to access these events. Suggested entry for /etc/sysctl.conf:

kernel.perf_event_paranoid = -1
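
As above, this can be set at runtime, as root, and verified via procfs:

sysctl -w kernel.perf_event_paranoid=-1
cat /proc/sys/kernel/perf_event_paranoid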

Special requirements for continuous integration and demo modes

Running the test suite in continuous integration or demo mode records the terminal while the steps are being executed, using asciinema(1), and creates binary packages.

The following additional packages are commonly needed:

alien asciinema linux-perf tshark

Regular test

Just issue:

./run

from the test directory. Elevated privileges are not needed. Environment variable settings: DEBUG=1 enables debugging messages, TRACE=1 enables tracing (further debugging messages), PCAP=1 enables packet captures. Example:

PCAP=1 TRACE=1 ./run

Running selected tests

Rudimentary support is available to run a list of selected tests, without automatic handling of dependencies. Tests need to have a setup function corresponding to their path. For example:

./run passt/ndp passt/dhcp pasta/ndp

will call the 'passt' setup function (from lib/setup), run the two corresponding tests, call the 'passt' teardown function, then the 'pasta' setup function, run the pasta/ndp test, and finally tear down the 'pasta' setup.

Note that requirements on steps implemented by related tests are not handled. For example, if the 'passt/tcp' test needs guest connectivity set up by the 'passt/ndp' and 'passt/dhcp' tests, those need to be listed explicitly, as shown below.
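
For instance, to run the passt/tcp test with guest connectivity in place, the invocation would look like this (test names as found in this directory):

./run passt/ndp passt/dhcp passt/tcp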

Continuous integration

Issuing:

./ci

will run the whole test suite while recording the execution, and it will also build JavaScript fragments used on http://passt.top/ for performance data tables and links to specific offsets in the captures.

Demo mode

Issuing:

./demo

will run the demo cases under demo, with terminal captures as well.

Framework

The implementation of the testing framework is under lib, and it provides facilities for terminal and tmux session management, interpretation of test directives, video recording, and the like. Test cases are organised in the remaining directories.

Test cases can be implemented as POSIX shell scripts, or as a set of directives, which are not formally documented here, but should be clear enough from the existing cases. The entry point for interpretation of test directives is implemented in lib/test.
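
As a rough illustration only, with directive names inferred from the existing cases rather than from any formal specification, a directive-based check might look like the sketch below: 'gout' runs a command in the guest and stores its output under the given name, and 'check' evaluates an assertion with __IFNAME__ substituted:

test Interface name
gout IFNAME ip -j link show | jq -rM '.[] | select(.link_type == "ether").ifname'
check [ -n "__IFNAME__" ]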