dave14305 you are right about jumping the gun - my enthusiasm got the better of me. I have been reading about SQM implementations in OpenWrt that involve shaping along the lines discussed in this thread, and about configuring the appropriate interface(s) when a VPN is in use.
For Asus-Merlin I have struggled to find the right information on simply changing the interface that QoS is applied to - is this documented anywhere? The only thing I could find was the post from RMerlin involving scripting as part of the CAKE beta testing, and I don't even know if that is still applicable.
I am open-minded - as you say, CAKE may not be the best for me. I have a 4G LTE connection that gives around 50-60 Mbit/s download and 20-25 Mbit/s upload, with a latency of around 55ms and unloaded jitter of about 5ms. The impact of the VPN (NordVPN) seems pretty minimal from the testing I have done, which I presume is because of my relatively poor connection. I need to use a VPN because my ISP (well, mobile phone provider) heavily and consistently throttles port 443 to 10Mbit/s (the VPN circumvents this), and I am keen to keep Microsoft Teams and Zoom calls working as well as possible despite background downloads. Since I am especially bothered about Teams and Zoom, the Adaptive QoS classification may make sense. I realise that my connection presents rather a challenge in terms of QoS - the bandwidth admittedly varies somewhat (especially on download), albeit the unloaded latency is pretty consistent.
So far all I have done is implement CAKE in accordance with your suggestion:
/tmp/qos stop
tc qdisc replace dev tun11 root cake bandwidth 5Mbit
tc qdisc replace dev br0 root cake bandwidth 5Mbit
Except that I have changed this to 30/20 (as a very crude initial increase from the 5/5 level). Overhead is not being properly taken into account, but I understand that it has a pretty minimal effect on the bandwidth calculation used in CAKE anyway. The overhead settings seem a little academic given that most users just converge on limits by trial and error.
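Concretely, the adjusted commands I am running look like this (the commented `overhead` line is just illustrative - the 44-byte value is an assumption I have not verified for an LTE + OpenVPN path):

```shell
# Stop the stock Adaptive QoS engine so it doesn't fight with cake
/tmp/qos stop

# Upload shaping on the VPN tunnel interface: 20 Mbit/s
tc qdisc replace dev tun11 root cake bandwidth 20Mbit

# Download shaping on the LAN bridge: 30 Mbit/s
tc qdisc replace dev br0 root cake bandwidth 30Mbit

# If I wanted to account for per-packet overhead explicitly, cake
# accepts an overhead keyword, e.g. (value unverified for my link):
#   tc qdisc replace dev tun11 root cake bandwidth 20Mbit overhead 44
```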
What I can say is that applying CAKE in the way you suggested above does seem to work, in that my latency no longer spikes during downloads, whereas on 'eth0' it did not appear to work. I understand that is because on 'eth0' CAKE only sees the single encrypted VPN flow, so it is operating blind to the individual flows inside the tunnel.
As you have asked, dave14305, here is the output from the commands:
[email protected]:/tmp/home/root# tc -s qdisc show dev tun11
Code:
qdisc cake 8003: root refcnt 2 bandwidth 20Mbit diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead 0
Sent 1010237661 bytes 2063238 pkt (dropped 3461, overlimits 2193762 requeues 0)
backlog 0b 0p requeues 0
memory used: 668160b of 4Mb
capacity estimate: 5Mbit
min/max network layer size: 29 / 1408
min/max overhead-adjusted size: 29 / 1408
average network hdr offset: 0
Bulk Best Effort Voice
thresh 1250Kbit 20Mbit 5Mbit
target 14.4ms 5ms 5ms
interval 109ms 100ms 100ms
pk_delay 0us 770us 1.11ms
av_delay 0us 35us 159us
sp_delay 0us 3us 4us
backlog 0b 0b 0b
pkts 0 2066202 497
bytes 0 1014820004 77366
way_inds 0 18588 31
way_miss 0 20576 58
way_cols 0 0 0
drops 0 3461 0
marks 0 1 0
ack_drop 0 0 0
sp_flows 0 0 2
bk_flows 0 1 0
un_flows 0 0 0
max_len 0 1408 576
quantum 300 610 300
[email protected]:/tmp/home/root# tc -s qdisc show dev br0
Code:
qdisc cake 8004: root refcnt 2 bandwidth 30Mbit diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead 0
Sent 2765966059 bytes 2873292 pkt (dropped 98902, overlimits 3814963 requeues 0)
backlog 0b 0p requeues 0
memory used: 1431040b of 4Mb
capacity estimate: 5Mbit
min/max network layer size: 42 / 1514
min/max overhead-adjusted size: 42 / 1514
average network hdr offset: 14
Bulk Best Effort Voice
thresh 1875Kbit 30Mbit 7500Kbit
target 9.69ms 5ms 5ms
interval 105ms 100ms 100ms
pk_delay 537us 2.41ms 227us
av_delay 30us 229us 25us
sp_delay 1us 0us 2us
backlog 0b 0b 0b
pkts 1400 2864113 106681
bytes 167097 2792627527 109345457
way_inds 0 47220 0
way_miss 55 24824 49
way_cols 0 0 0
drops 0 98882 20
marks 0 10 0
ack_drop 0 0 0
sp_flows 0 1 0
bk_flows 0 1 1
un_flows 0 0 0
max_len 1379 7570 34010
quantum 300 915 300
I also repeated the above for 5/5:
[email protected]:/tmp/home/root# tc -s qdisc show dev tun11
Code:
qdisc cake 8009: root refcnt 2 bandwidth 5Mbit diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead 0
Sent 407110198 bytes 893164 pkt (dropped 2163, overlimits 1083153 requeues 0)
backlog 0b 0p requeues 0
memory used: 830Kb of 4Mb
capacity estimate: 20Mbit
min/max network layer size: 40 / 1378
min/max overhead-adjusted size: 40 / 1378
average network hdr offset: 0
Bulk Best Effort Voice
thresh 312496bit 5Mbit 1250Kbit
target 57.6ms 5ms 14.4ms
interval 153ms 100ms 109ms
pk_delay 0us 1.84ms 1.34ms
av_delay 0us 112us 158us
sp_delay 0us 1us 10us
backlog 0b 0b 0b
pkts 0 895183 144
bytes 0 410035450 12425
way_inds 0 10308 0
way_miss 0 1234 12
way_cols 0 0 0
drops 0 2163 0
marks 0 3 0
ack_drop 0 0 0
sp_flows 0 1 1
bk_flows 0 1 0
un_flows 0 0 0
max_len 0 1378 576
quantum 300 300 300
[email protected]:/tmp/home/root# tc -s qdisc show dev br0
Code:
qdisc cake 8004: root refcnt 2 bandwidth 5Mbit diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead 0
Sent 4082401638 bytes 4095713 pkt (dropped 146691, overlimits 5464741 requeues 0)
backlog 0b 0p requeues 0
memory used: 1431040b of 4Mb
capacity estimate: 5Mbit
min/max network layer size: 42 / 1514
min/max overhead-adjusted size: 42 / 1514
average network hdr offset: 14
Bulk Best Effort Voice
thresh 312496bit 5Mbit 1250Kbit
target 58.1ms 5ms 14.5ms
interval 153ms 100ms 110ms
pk_delay 375us 8.77ms 2.3ms
av_delay 29us 6.33ms 193us
sp_delay 2us 2us 20us
backlog 0b 0b 0b
pkts 1576 4132619 108209
bytes 183085 4174802673 109445618
way_inds 0 51839 0
way_miss 68 28699 58
way_cols 0 0 0
drops 0 146671 20
marks 0 11 0
ack_drop 0 0 0
sp_flows 1 0 1
bk_flows 0 1 0
un_flows 0 0 0
max_len 1379 7570 34010
quantum 300 300 300
I wonder what this reveals?
On 5/5 here is a high-resolution dslreports.com/speedtest result (with the settings recommended by the OpenWrt guys: high-resolution bufferbloat and 30-second upload/download tests): www.dslreports.com
and here is the Waveform bufferbloat test result: www.waveform.com
Regarding:
Maybe fq_codel on the tun interfaces (for flow fairness) and a simple Token Bucket filter on the WAN interface (for shaping).
I would be very eager to try this out.
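As I understand the suggestion, the setup would look roughly like this - the tbf `rate`, `burst` and `latency` values below are my own guesses as starting points, not something you recommended:

```shell
# Flow fairness on the VPN tunnel, where per-flow detail is
# visible to the qdisc (traffic is not yet encrypted here)
tc qdisc replace dev tun11 root fq_codel

# Simple token bucket shaping on the WAN interface.
# rate roughly matches my ~20 Mbit/s upload; burst and
# latency are rough starting values I would tune by testing.
tc qdisc replace dev eth0 root tbf rate 20Mbit burst 32kb latency 50ms
```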