So far we have covered what VPP is, and why it's interesting to us.

Part of the story with any new service/implementation always centres around testing. How do you prove, definitively, that something does what it says on the tin? RFC 2544 outlines a series of testing strategies, and for the purposes of this work we keep it simple.
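The core RFC 2544 throughput idea is a search for the highest rate at which loss stays within tolerance. Here is a minimal sketch of that binary search; `measure_loss` is a hypothetical stand-in for a real traffic-generator run, and the numbers are illustrative only.

```python
# Sketch of an RFC 2544-style throughput search. measure_loss() is a
# hypothetical callback standing in for a real traffic-generator run:
# it takes a rate in pps and returns the observed loss fraction.
def find_throughput(measure_loss, line_rate_pps, loss_tolerance=0.001,
                    resolution_pps=10_000):
    """Binary-search the highest rate whose loss stays under tolerance."""
    lo, hi = 0.0, line_rate_pps
    best = 0.0
    while hi - lo > resolution_pps:
        rate = (lo + hi) / 2
        if measure_loss(rate) <= loss_tolerance:
            best, lo = rate, rate   # passed: search higher
        else:
            hi = rate               # failed: search lower
    return best

if __name__ == "__main__":
    # Toy model: pretend the device starts dropping above 132.5 Mpps.
    ndr = find_throughput(lambda r: 0.0 if r <= 132.5e6 else 0.01, 148.8e6)
    print(f"No-drop rate is roughly {ndr / 1e6:.2f} Mpps")
```

Real test tools add trial durations and repeat counts on top of this, but the search loop is the essence of it.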

I have deployed a TRex v3.04 traffic generator on Debian 11 (OFED 5.7-1 doesn’t build on Debian 12) with Mellanox 2x100G ConnectX-5 CDAT cards. There are really nice instructions here for that. Happily ignore the CentOS 7 religious texts - Debian is fine. Death to DeadRat.

Before we can test a router, we have to test the tester, of course. Again, I think it's important to give thanks and credit to Pim and Michal for their previous work on the trex-loadtest script and the Ruby rendering tool.

According to the instructions, we should expect line rate everywhere except at 64b (and since imix uses 64b frames in places, it should impact that too):
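As a quick back-of-the-envelope for what "line rate" means per frame size: each Ethernet frame also burns 20 bytes of wire overhead (8B preamble/SFD + 12B inter-frame gap), so the theoretical packet rate at 100G works out like this (a sketch assuming that standard overhead; exact tester numbers may differ slightly depending on how the frame size is counted):

```python
# Theoretical line rate at 100GbE per frame size, assuming the usual
# 20 bytes of per-frame wire overhead (8B preamble/SFD + 12B IFG).
LINE_BPS = 100e9

def line_rate_mpps(frame_bytes, overhead_bytes=20):
    return LINE_BPS / ((frame_bytes + overhead_bytes) * 8) / 1e6

for size in (64, 1514):
    print(f"{size}B frames: {line_rate_mpps(size):.2f} Mpps")
    # 64B  -> 148.81 Mpps
    # 1514B -> 8.15 Mpps
```

So 1514b and imix are comfortably achievable on modern hardware, while 64b at ~148.8 Mpps is where NICs start to struggle.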

For this first test, then, we will run a unidirectional flow to see the absolute maximum performance for a single flow of traffic over this Mellanox NIC.

mlx5-throughput-stats

So here we have my self-test, with a back-to-back 100G DAC:

trex-selftest-1514b

Line rate all day long. Official stats were 8.172Mpps with no loss.

trex-selftest-imix

Interesting. Also line rate? Official stats were 32.697Mpps with no loss.

trex-selftest-64b

So close! The peak number was 133.145Mpps (89.47% of line rate), but we started to see >0.1% loss after 132.526Mpps. This aligns with the numbers in the original appendix for the CX5 card, so we can call that “a success”.
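That 89.47% figure checks out against the theoretical 64b line rate (a quick sanity calculation, again assuming 20 bytes of per-frame wire overhead):

```python
# Sanity check on the 64B result: 133.145 Mpps against the theoretical
# line rate for 64B frames at 100G (20B preamble + inter-frame gap).
line_rate_pps = 100e9 / ((64 + 20) * 8)          # ~148.81 Mpps
pct = 133.145e6 / line_rate_pps * 100
print(f"{pct:.2f}% of line rate")                # 89.47% of line rate
```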

Therefore, we can now run any test on a downstream “device under test” (DUT) and know that any drop in received traffic is down to that device and not the tester itself.