DSLReports Speed Test Now Measuring Buffer Bloat


Might be useful if people posted their router and firmware version when reporting results.

It seems that my RT-N66 may not have the CPU horsepower to manage the QoS settings to improve the bufferbloat score.
 
544967.png


500/500 Mbit fiber to the home in Denmark. RJ45 breakout straight into the AC66U.

Asus AC66U running 3.0.0.4.378_4850-g727db45
 
Would decreasing the wmem and rmem default and max values have any negative impact on local inter-device Wi-Fi performance? I have a NAS with 1080p media files connected to the AC87, and an 802.11ac device from which I play those files over FTP. I'm sometimes getting artifacts during playback of that content; I didn't get that many before I adjusted the wmem and rmem values, so I'm wondering if this could be related. It could also just be the source material, so the artifacts might be completely unrelated; I'd need to do some testing to find out. But in any case, it would be interesting to know whether decreasing the values might have any theoretical negative impact on local network throughput.
 
If you have an AC87, you should be using Adaptive QoS and not have to worry about the buffer sizes (changing the buffer sizes was proposed as a possible solution for traditional QoS where Adaptive wasn't available and bufferbloat was a concern).

But to answer your question, yes, if you reduce the buffer sizes at some point it would affect throughput.
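To put rough numbers on that: a TCP sender needs roughly a bandwidth-delay product worth of data in flight to keep a link busy, so shrinking the send buffer well below that caps per-connection throughput. A minimal sketch of the arithmetic, using made-up example figures rather than anything measured on a router:
Code:
#!/bin/sh

# Rough bandwidth-delay product estimate (example figures, not measurements).
# A single TCP stream needs roughly rate * RTT bytes in flight to fill the
# link, so a send buffer much smaller than this limits that stream's speed.
RATE_BPS=100000000   # assumed link rate: 100 Mbit/s
RTT_MS=30            # assumed round-trip time: 30 ms
echo $(( RATE_BPS / 8 * RTT_MS / 1000 ))   # prints 375000 (bytes)

Whether a particular reduction actually hurts depends on the rate and round-trip time of the path in question.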
 
Thanks very much for that clear explanation. Seems like I've performed some unnecessary tuning then :)
 
An update: I now get an A on this test.

Seems Adaptive QoS isn't needed.

All I did was enable standard QoS, set upstream to 90% of my max speed, and leave downstream at 100%; nothing else was changed and now I get an A.

The results are pretty good. When I was downloading and using SSH at the same time, I used to take a latency hit, with even some dropped packets on SSH. Even though downstream is still set to 100%, I now have a very smooth cursor in SSH with no dropped packets, so it seems simply enabling QoS helps the downstream too. Likewise when uploading, which usually has a much worse effect, it stays silky smooth.

681469.png


Previous result:

555890.png
 
RT-AC66U, no QoS used
684152.png



Remote desktop into a friend's connection:
RT-AC56U, no QoS used.
Same link speed, same settings,
except he's on PPPoE and I'm on DHCP.
684202.png
 
Here is the link to the Asus web site:

http://www.asus.com/us/support/FAQ/1008718/

Hmmm... After reading all those pages, I am left with the impression that the Asus Adaptive QoS is some kind of secret sauce.

It appears to work well - virtually all the commenters on this forum and others I read are pleased.

But I was hoping for a clearer description of what it does (and not just how to set it up), so that others in the networking community could learn from it and make the world's networks work better. Any takers?
 
Basically, Trend Micro's closed-source module classifies traffic based on signatures (I believe those are the signature files it downloads). Then that classified traffic gets handled by the Linux traffic classifier, the same way traditional QoS does.

The key difference is in the traffic classification: DPI-based for Adaptive, user-entered rules (and therefore requiring CTF to be disabled) for Traditional.

So regarding bufferbloat, the key there is limiting your max upstream; the improvement isn't necessarily from Adaptive QoS itself.
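For anyone curious what "limiting your max upstream" looks like at the Linux traffic-classifier level, here is a minimal sketch using tc with HTB and SFQ. It is only an illustration of the general idea; the interface name and the 18 Mbit figure (90% of a 20 Mbit/s uplink) are assumptions, not what the firmware actually configures:
Code:
#!/bin/sh

# Sketch: cap upstream at ~90% of a 20 Mbit/s uplink so the queue builds in
# the router (where it can be managed) instead of in the modem/ISP buffers.
WAN_IF=eth0                       # assumed WAN interface name
tc qdisc add dev $WAN_IF root handle 1: htb default 10
tc class add dev $WAN_IF parent 1: classid 1:10 htb rate 18mbit ceil 18mbit
tc qdisc add dev $WAN_IF parent 1:10 handle 10: sfq perturb 10

The point of capping just below the real uplink speed is that the router becomes the bottleneck, so packets wait in its controllable queue rather than in the deep, uncontrollable buffer further upstream.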
 
What surprised me, Merlin, is that before I had a moderate latency hit when just downloading (only ACKs on upstream); that also improved, so I think enabling QoS means fewer packets get buffered when downloading as well.

The result doesn't show it, but watching the DSLReports test live shows the latency variation on both the downstream and the upstream.

Before, downstream latency went from 20 ms up to about 150 ms and upstream rocketed to 500+ ms; now downstream only goes up to about 30-40 ms and upstream stays steady, barely moving.
 
Might be related to ACK packets getting a higher priority perhaps.
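One common way a Linux-based QoS setup does that is to mark small outbound ACK-only packets and steer the mark into a high-priority class on the shaper. A rough sketch of the idea (the interface name, mark value and class ID are assumptions, and this is not necessarily what the Asus/Trend Micro implementation does):
Code:
#!/bin/sh

# Mark small outbound TCP packets with only the ACK flag set (typically bare
# ACKs) so a shaper filter can place them in a high-priority class.
WAN_IF=eth0                       # assumed WAN interface name
iptables -t mangle -A POSTROUTING -o $WAN_IF -p tcp \
    --tcp-flags SYN,RST,ACK,FIN ACK -m length --length 0:64 \
    -j MARK --set-mark 0x10
# A matching tc filter would then steer mark 0x10 into a high-priority class
# (1:5 here, which would have to exist on the shaper):
# tc filter add dev $WAN_IF parent 1: protocol ip handle 0x10 fw flowid 1:5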
 
A higher priority than what? :)

The upload was idle other than the download, so it's only sending ACKs.

For reference, my upstream is 20 Mbit/s, and ACKs only need about 2.5 Mbit/s at full download speed (I have delayed ACKs disabled).

Your answer would make sense if I were uploading something at the same time.

Whatever it is, it works wonders.
 
Usually, it is an AQM that controls buffer length ("bufferbloat"), like RED, RIO, BLUE, or the more modern CoDel.

As dtaht (a CoDel dev :)) mentioned, it would be interesting to know what (if anything) is happening behind the scenes of the Trend Micro QoS.
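For reference, on a stock Linux box a modern AQM can be enabled with a one-line tc command. This is just an illustration of what such an AQM looks like, not a claim about what the Trend Micro module does internally (the interface name is an assumption):
Code:
#!/bin/sh

# Replace the root qdisc on the (assumed) WAN interface with fq_codel, which
# keeps queueing delay low by dropping/marking packets as the queue grows,
# instead of letting a large FIFO buffer fill up.
tc qdisc replace dev eth0 root fq_codel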
 
For those running my fork (and who like to experiment :) ), I was able to improve my Bufferbloat score from a 'C' to an 'A' by....

(1) turning on traditional QoS and setting the up/down bandwidth to 90% of rated speed (rest of settings default)
(2) shrinking the send buffer size via an init-start script...
Code:
#!/bin/sh

# Adjust TCP send buffer (default is 120832 bytes)
echo 59392 > /proc/sys/net/core/wmem_default
echo 59392 > /proc/sys/net/core/wmem_max

The '59392' value was trial and error and seemed to give the best results for me (YMMV). I have a 50/5 Mbps service.

Also, turning on my VPN service turns things back to 'C' (I did set the sndbuf for the VPN accordingly), but the ping peaks are better than they were before the change.
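If you want to confirm what your router is currently using, the same sysctl entries can simply be read back before and after applying the script:
Code:
#!/bin/sh

# Read back the current send/receive buffer defaults and maximums
cat /proc/sys/net/core/wmem_default /proc/sys/net/core/wmem_max
cat /proc/sys/net/core/rmem_default /proc/sys/net/core/rmem_max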

My experience:
On an RT-AC68, the default and max on mine were 122880 (determined by running cat on the two paths from your code). Using your script and dividing 122880 by 2, I set it to 61440. Now I'm A+ for bufferbloat using my (wired) PS4 browser (shocked). I'm using Merlin's fork with my '68 as gateway on FiOS (75/75) and QoS off. I immediately saw a tremendous difference for the better in games. Thanks for posting this - it was a tremendous help!
 
If your router is the throughput pinch-point (instead of the ISP), then buffer size should be unimportant for download traffic.

How much buffering is happening when a 10-100Mbit WAN interface transmits to a 100-1000Mbit LAN interface?
 
Bufferbloat happens both ways, but upstream is much more severe, hence it gets most of the attention.
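Some illustrative arithmetic on why upstream hurts more: the same size buffer drains far more slowly over a slow uplink, so it adds far more delay. The figures below are examples, not measurements from this thread:
Code:
#!/bin/sh

# Delay added by a full 120 KB (122880-byte) buffer at different link speeds:
#   delay_ms = buffer_bytes * 8 * 1000 / rate_bps
BUF=122880
for RATE in 5000000 20000000 500000000; do   # 5, 20 and 500 Mbit/s
    echo "$RATE bps: $(( BUF * 8 * 1000 / RATE )) ms"
done
# Roughly 196 ms at 5 Mbit/s, 49 ms at 20 Mbit/s, under 2 ms at 500 Mbit/s.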
 
