
On a system with an existing multi-stage qdisc setup, we need to introduce extra latency (fixed at minimum, though fixed with a small random variation would be a nice option to have).

The canonical way to do this on Linux is the netem qdisc. However, that cannot work here because netem does not work in combination with other qdiscs (this is a well-documented limitation, and one a coworker has verified himself).
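For reference, using netem on its own is straightforward; something like the following (device name and delay figures are placeholders, not our actual values) adds 50 ms of fixed delay with ±5 ms of variation:

```shell
# illustrative only: netem as the sole root qdisc on eth0
sudo tc qdisc add dev eth0 root netem delay 50ms 5ms
```

The problem is combining this with an existing qdisc hierarchy like the one described below.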

Putting a VM onto the machine that does nothing but netem not only seems like overkill but would also massively complicate routing and WLAN configuration, so that’s a step I would prefer not to take.

In case it’s relevant, the current setup is a combination of htb (used only to limit bandwidth; we probably should have used tbf instead, but this is what we had when I joined) and fq_codel for ECN marking, both with custom patches. I’m not averse to patching this into either…

As requested, here is a sample setup, using stock htb/fq_codel for easier testing:

#!/bin/mksh
set -ex
dev=eth0
rate=1000  # kbit/s
# root htb qdisc; unclassified traffic falls into class 1:1 by default
sudo tc qdisc add dev $dev root handle 1: htb default 1
# single class capping both guaranteed rate and ceiling at $rate kbit/s
sudo tc class add dev $dev parent 1: classid 1:1 htb rate ${rate}kbit ceil ${rate}kbit prio 1
# fq_codel as the leaf qdisc, providing AQM/ECN marking
sudo tc qdisc add dev $dev parent 1:1 handle 2: fq_codel
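For concreteness, the kind of stacking I would want looks something like the following sketch: inserting netem between the htb class and the fq_codel leaf (the delay figures are illustrative, and this is exactly the sort of combination that does not work for us):

```shell
# sketch only: replace the fq_codel leaf with netem,
# then hang fq_codel underneath netem as its child
sudo tc qdisc del dev $dev parent 1:1 handle 2:
sudo tc qdisc add dev $dev parent 1:1 handle 2: netem delay 50ms 5ms
sudo tc qdisc add dev $dev parent 2: handle 3: fq_codel
```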
mirabilos
  • Could you point to the well-documented limitation that netem does not work with other qdiscs? Also if you could show some of your "code" (ie: the tc hierarchy or commands), it would be easier to validate if an idea is working or not. – A.B Mar 02 '22 at 19:31
    @A.B added inline (link, but a coworker has seen that problem himself; sample setup code modified to use stock qdisc modules so you can test things more easily) – mirabilos Mar 02 '22 at 22:57
