
I configured an Open vSwitch-based topology using Mininet on my local machine:

h1 (server) -------(eth1) s1 (eth2)--------------- h2 (client)
                         (eth3)
                          |
                          |
                         h3 (client)
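
For reference, this topology (a single OVS switch with three hosts and 100 Mbps / 1 ms links) can be reproduced with a one-line mn invocation; my actual setup uses a script, so the command below is only an equivalent sketch:

# sketch: one OVS kernel switch, three hosts, TCLink shaping on every link
sudo mn --topo single,3 --switch ovsk --link tc,bw=100,delay=1ms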

All links are configured with 100 Mbps bandwidth and 1 ms delay, implemented with the following qdisc/class hierarchy:

qdisc htb 5: dev s1-eth1 root refcnt 2 r2q 10 default 1 direct_packets_stat 0 ver 3.17 direct_qlen 1000 
 Sent 15134395731 bytes 9999886 pkt (dropped 467, overlimits 1558696 requeues 0)                        
 backlog 0b 0p requeues 0                                                                                
qdisc netem 10: dev s1-eth1 parent 5:1 limit 1000 delay 1.0ms                                            
 Sent 15134395731 bytes 9999886 pkt (dropped 467, overlimits 0 requeues 0)                              
 backlog 0b 0p requeues 0

class htb 5:1 root leaf 10: prio quantum rate 100Mbit ceil 100Mbit linklayer ethernet burst 1600b/1 mpu 0b cburst 1600b/1 mpu 0b level 0                                                                         
 Sent 15134395731 bytes 9999886 pkt (dropped 0, overlimits 0 requeues 0)                                 
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 1123299 borrowed: 0 giants: 0                                                                   
 tokens: 1917 ctokens: 1917         

class netem 10:1 parent 10: leaf 8007:
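
These qdiscs and classes are installed by Mininet's TCLink; the equivalent tc commands should be roughly the following, with handles matching the output above:

# approximately what TCLink sets up on s1-eth1 (same on the other interfaces)
tc qdisc add dev s1-eth1 root handle 5: htb default 1
tc class add dev s1-eth1 parent 5: classid 5:1 htb rate 100Mbit ceil 100Mbit
tc qdisc add dev s1-eth1 parent 5:1 handle 10: netem delay 1ms limit 1000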

I started two iperf3 servers on h1 and one client on each of h2 and h3, and everything worked as expected.
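
Roughly, the traffic was generated like this from the Mininet CLI; the addresses and ports shown are Mininet defaults and illustrative values, not the exact ones I used:

mininet> h1 iperf3 -s -p 5201 &
mininet> h1 iperf3 -s -p 5202 &
mininet> h2 iperf3 -c 10.0.0.1 -p 5201 -t 60 &
mininet> h3 iperf3 -c 10.0.0.1 -p 5202 -t 60 &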

Then, to enforce QoS, and since the two existing qdiscs are set up by Mininet and cannot be changed, I added a new pfifo_fast qdisc with the netem qdisc as its parent:

qdisc pfifo_fast 8007: dev s1-eth1 parent 10:1 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 
 Sent 6278984495 bytes 4149415 pkt (dropped 467, overlimits 0 requeues 0)                                
 backlog 0b 0p requeues 0
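
The qdisc was added with essentially this single command; the 8007: handle was assigned automatically by the kernel:

mininet> sh tc qdisc add dev s1-eth1 parent 10:1 pfifo_fast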

It worked well at first, but after several seconds s1-eth1 began to drop all outgoing packets. tc reports the following:

qdisc htb 5: dev s1-eth1 root refcnt 2 r2q 10 default 1 direct_packets_stat 0 direct_qlen 1000           
 Sent 14716440349 bytes 9723342 pkt (dropped 387, overlimits 1494731 requeues 0)                         
 backlog 641479b 332p requeues 0                                                                         
qdisc netem 10: dev s1-eth1 parent 5:1 limit 1000 delay 1.0ms                                            
 Sent 14716440349 bytes 9723342 pkt (dropped 387, overlimits 0 requeues 0)                               
 backlog 0b 0p requeues 0                                                                                
qdisc pfifo_fast 8007: dev s1-eth1 parent 10:1 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1          
 Sent 5861029113 bytes 3872871 pkt (dropped 387, overlimits 0 requeues 0)                                
 backlog 641479b 332p requeues 0

mininet> sh tc -s -d class show dev s1-eth1                                                              
class htb 5:1 root leaf 10: prio quantum rate 100Mbit ceil 100Mbit linklayer ethernet burst 15337b/1 mpu 0b cburst 1600b/1 mpu 0b level 0
 Sent 14717084878 bytes 9723917 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 1087545 borrowed: 0 giants: 0
 tokens: 19104 ctokens: 1917

Does my configuration look sensible? If not, what is the proper way to combine bandwidth limiting, simulated delay, and QoS using tc?
