Installed 380.59 git with the QoS patches last night. Everything works fine on my RT-AC56U except for one annoying thing: all the LEDs turned off except the power LED. After some tinkering I found that a simple "service restart_wireless" brought all the LEDs back to life, so I put that command in the services-start user script to be executed on every boot. No other complaints for now; my use case is rather simple after all, with the sole exception of the QoS customizations I absolutely need (or the wife would kill me if her Skype work calls hiccuped).
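
In case it helps anyone else, here is a minimal sketch of that workaround as a user script; the path and name follow the usual Asuswrt-Merlin custom-script convention, and it assumes JFFS custom scripts are enabled:

Code:
#!/bin/sh
# /jffs/scripts/services-start
# Workaround: all LEDs except power stay off after boot on my RT-AC56U,
# so restart the wireless service once all services have started.
service restart_wireless

Don't forget to make the script executable (chmod a+rx /jffs/scripts/services-start).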

I did more tests last night with fq_codel, and so far I'm at a point where I can't see any reason to support it versus the current sfq implementation.
 
I did more tests last night with fq_codel, and so far I'm at a point where I can't see any reason to support it versus the current sfq implementation.
Doing some weird tests here: I put it as the root qdisc on eth1 and eth2 on my RT-AC56U instead of pfifo_fast, to see if it affects anything!
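
For reference, this is how the swap can be done from the shell; just a sketch (eth1/eth2 are the two radios on the RT-AC56U), not something the stock firmware does by itself:

Code:
# Replace the default pfifo_fast root qdisc with fq_codel on both radios
tc qdisc replace dev eth1 root fq_codel
tc qdisc replace dev eth2 root fq_codel
# Check the result
tc qdisc show dev eth1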
 
The MiniDLNA status page at http://router.asus.com:8200/ is "Not Found".

This was disabled by Asus in 2695 for security reasons, probably because it was wide open to everyone on the LAN.
 
I did more tests last night with fq_codel, and so far I'm at a point where I can't see any reason to support it versus the current sfq implementation.
OK, so I decided to get serious and invested the last 2 hours in this (and I am not done yet). Using the DSL Reports speed test, first of all I discovered that you can set the number of streams as high as 32. But there's a catch! Unless you support IPv6, the number of streams is limited to 3! My ISP has supported native IPv6 for years now, so I enabled that in the DSL Reports test preferences and ran numerous tests with my own HFSC-enabled script (rough sketch below), one version with SFQ and one with FQ_CODEL. The results favor FQ_CODEL, which has less jitter and better balance on the upload. It frequently gives me an A+, while with SFQ I get an A, and on the rare occasions I do get an A+ I get a C in quality! My best result:



More tests to follow; I am switching to ASUS's default HTB-based QoS to see how it behaves.
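
For context, here is a rough sketch of the kind of HFSC setup I've been testing; the rates and class IDs are made up for illustration, and the only line that changes between the SFQ and FQ_CODEL runs is the leaf qdisc:

Code:
# Upload shaping on the WAN interface (illustrative rates only)
WAN=eth0
tc qdisc add dev $WAN root handle 1: hfsc default 10
tc class add dev $WAN parent 1: classid 1:1 hfsc sc rate 4mbit ul rate 4mbit
tc class add dev $WAN parent 1:1 classid 1:10 hfsc sc rate 4mbit ul rate 4mbit
# Leaf qdisc: swap this line between the two test runs
tc qdisc add dev $WAN parent 1:10 handle 10: fq_codel   # or: sfq perturb 10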
 
And here are my ASUS QoS results: First is FQ_CODEL:


Second is the SFQ:


The FQ_CODEL version consistently gave me more A's, while with SFQ I was getting all B's most of the time.

Personally, I think it is worth keeping it in the code, but you're the boss.
 
My ISP also doesn't support IPv6, and I'm not going to use a tunnel for it either, as it would castrate my speeds all the way down to 10 Mbit, if that.

I have noticed the DSLReports speed test has gotten better. When I first started using it I was barely hitting 70 Mbit down while every other test showed 100+ down; now it shows 100 Mbit consistently.

Bufferbloat is fine as long as I keep QoS on; if I disable it, it goes all over the place.
Repost the image with an actual link.

 
The FQ_CODEL version consistently gave me more A's, while with SFQ I was getting all B's most of the time.

Personally, I think it is worth keeping it in the code, but you're the boss.

Thanks for the extensive testing. Odd that in my particular case, I saw slightly better results with SFQ. I'm on a cable connection however, so typical latency is quite different from DSL. I also didn't test with a lot of simultaneous streams.

For me, I can get an A regardless of whether I use SFQ or FQ_CODEL. However, I do limit my down/upstream to slightly below their maximum speed, as is recommended in most QoS guides (including OpenWRT's own guide on SQM).

Last night's tests mostly involved having two computers connected to the router. One had a high priority, the other one a low priority (within Asus's QoS rules). Then I ran speedtest.net on both computers at the same time, comparing results, while running a constant stream of pings to www.google.ca (my ISP has a local proxy for it, so my typical ping to it averages 13-14 ms while the connection is at rest). Both schedulers did equally well at spreading upstream data between the low and high priority computers (out of 10 Mbps one was steady at 1 Mbps, the other at 9 Mbps). Downstream still didn't work properly (I would probably need to change Asus's QoS rules to use IFB to help with this, something that's a bit outside of my current expertise level). Ping results were inconclusive (too much variation in the collected data).
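
For the downstream direction, the usual trick is to redirect WAN ingress traffic to an IFB device and shape it there. A rough sketch of what that could look like (not what Asus's QoS scripts currently do; the rate is illustrative):

Code:
# Redirect WAN ingress to ifb0 so it can be shaped like egress traffic
modprobe ifb
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
# Shape the redirected (downstream) traffic on ifb0
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 25mbit
tc qdisc add dev ifb0 parent 1:10 fq_codel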

I also did some tests using Pingplotter so I'd get a visual report of the latency; once again the data was inconclusive.

As for my tests using OpenWRT's SQM rather than Asus's QoS, something was totally wrong with it (speeds were cut by 1/3 in both directions).

So far, the kernel changes don't seem to create any issues with the closed-source components, so that would allow me to include this in master and hide/expose the sched control knob based on the router model.
 
Makes sense that you get issues with a tunneled IPv6 solution; it adds overhead and requires CPU cycles to recover the IPv6 packets from inside the IPv4 ones. Certainly not something to use for speed and bufferbloat testing.
 
One interesting thing about Adaptive QoS versus Traditional: Trend Micro also implements queues on br0, not just on eth0.

Code:
admin@Stargate88:/tmp/home/root# tc qdisc show
qdisc pfifo_fast 0: dev fwd1 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc htb 1: dev eth0 root refcnt 2 r2q 10 default 0 direct_packets_stat 381371
qdisc htb 10: dev eth0 parent 1:10 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 1256: dev eth0 parent 10:256 limit 127p quantum 1514b divisor 1024
qdisc htb 11: dev eth0 parent 1:11 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 2256: dev eth0 parent 11:256 limit 127p quantum 1514b divisor 1024
qdisc htb 12: dev eth0 parent 1:12 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 3256: dev eth0 parent 12:256 limit 127p quantum 1514b divisor 1024
qdisc htb 13: dev eth0 parent 1:13 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 4256: dev eth0 parent 13:256 limit 127p quantum 1514b divisor 1024
qdisc htb 14: dev eth0 parent 1:14 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 5256: dev eth0 parent 14:256 limit 127p quantum 1514b divisor 1024
qdisc htb 15: dev eth0 parent 1:15 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 6256: dev eth0 parent 15:256 limit 127p quantum 1514b divisor 1024
qdisc htb 16: dev eth0 parent 1:16 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 7256: dev eth0 parent 16:256 limit 127p quantum 1514b divisor 1024
qdisc htb 17: dev eth0 parent 1:17 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 8256: dev eth0 parent 17:256 limit 127p quantum 1514b divisor 1024
qdisc pfifo_fast 0: dev eth1 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth2 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc htb 1: dev br0 root refcnt 2 r2q 10 default 2 direct_packets_stat 5
qdisc sfq 2: dev br0 parent 1:2 limit 127p quantum 1514b divisor 1024
qdisc htb 10: dev br0 parent 1:10 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 1256: dev br0 parent 10:256 limit 127p quantum 1514b divisor 1024
qdisc htb 11: dev br0 parent 1:11 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 2256: dev br0 parent 11:256 limit 127p quantum 1514b divisor 1024
qdisc htb 12: dev br0 parent 1:12 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 3256: dev br0 parent 12:256 limit 127p quantum 1514b divisor 1024
qdisc htb 13: dev br0 parent 1:13 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 4256: dev br0 parent 13:256 limit 127p quantum 1514b divisor 1024
qdisc htb 14: dev br0 parent 1:14 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 5256: dev br0 parent 14:256 limit 127p quantum 1514b divisor 1024
qdisc htb 15: dev br0 parent 1:15 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 6256: dev br0 parent 15:256 limit 127p quantum 1514b divisor 1024
qdisc htb 16: dev br0 parent 1:16 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 7256: dev br0 parent 16:256 limit 127p quantum 1514b divisor 1024
qdisc htb 17: dev br0 parent 1:17 r2q 10 default 256 direct_packets_stat 0
qdisc sfq 8256: dev br0 parent 17:256 limit 127p quantum 1514b divisor 1024
qdisc pfifo_fast 0: dev tun21 root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 
Thanks for the extensive testing. Odd that in my particular case, I saw slightly better results with SFQ. I'm on a cable connection however, so typical latency is quite different from DSL. I also didn't test with a lot of simultaneous streams.

For me, I can get an A regardless of whether I use SFQ or FQ_CODEL. However, I do limit my down/upstream to slightly below their maximum speed, as is recommended in most QoS guides (including OpenWRT's own guide on SQM).

Last night's tests mostly involved having two computers connected to the router. One had a high priority, the other one a low priority (within Asus's QoS rules). Then I ran speedtest.net on both computers at the same time, comparing results, while running a constant stream of pings to www.google.ca (my ISP has a local proxy for it, so my typical ping to it averages 13-14 ms while the connection is at rest). Both schedulers did equally well at spreading upstream data between the low and high priority computers (out of 10 Mbps one was steady at 1 Mbps, the other at 9 Mbps). Downstream still didn't work properly (I would probably need to change Asus's QoS rules to use IFB to help with this, something that's a bit outside of my current expertise level). Ping results were inconclusive (too much variation in the collected data).

I also did some tests using Pingplotter so I'd get a visual report of the latency; once again the data was inconclusive.

As for my tests using OpenWRT's SQM rather than Asus's QoS, something was totally wrong with it (speeds were cut by 1/3 in both directions).

So far, the kernel changes don't seem to create any issues with the closed-source components, so that would allow me to include this in master and hide/expose the sched control knob based on the router model.
Ah, for the purposes of the test I used a much lower setting for the upload in the QoS, exactly to eliminate possible overhead and buffering. My upload is 5 Mbps; I set the QoS limit to 4 Mbps just for testing. My actual connection is VDSL2 with 35 Mbps down and 5 Mbps up.
 
Ah, for the purposes of the test I used a much lower setting for the upload in the QoS, exactly to eliminate possible overhead and buffering. My upload is 5 Mbps; I set the QoS limit to 4 Mbps just for testing. My actual connection is VDSL2 with 35 Mbps down and 5 Mbps up.

My ISP profile is 33/11 (yes, it's THAT weird - they bumped them by 10% a few months ago without telling anyone, and still advertise it as 30/10), so I configure mine to 31/10.
 
One interesting thing about Adaptive QoS versus Traditional: Trend Micro also implements queues on br0, not just on eth0.

Quite interesting, but not of much use for us users of the normal Linux QoS. The Trend Micro QoS has the big advantage that CTF can work with it, but the use of HTB is really antiquated, plus it just fails to detect important things like VoIP programs. That's why I abandoned it in the first place; the wife was yelling that her Skype calls were crap!
 
I gather I need a newer router to try this Adaptive QoS vs. Traditional?

Correct. The Trend Micro engine powering it is only available on ARM-based models, with faster CPUs.
 
Quite interesting, but not of much use for us users of the normal Linux QoS. The Trend Micro QoS has the big advantage that CTF can work with it, but the use of HTB is really antiquated, plus it just fails to detect important things like VoIP programs. That's why I abandoned it in the first place; the wife was yelling that her Skype calls were crap!

Odd since even OpenWRT's SQM using fq_codel queues still uses HTB at the root.
 
Odd since even OpenWRT's SQM using fq_codel queues still uses HTB at the root.
I used to use HTB a long time ago on the Linux box I had as a router. The reason I switched to HFSC was that it performs better for VoIP applications: HTB reorders packets very frequently while HFSC doesn't! It's a difference you'll never see while just downloading data, but it makes a hell of a difference in VoIP apps, which are very sensitive to the order in which packets arrive; out-of-order packets create jitter, and that causes delays since the application has to buffer more.

The problem with HFSC, though, is that it is much more complicated to configure, so you won't find it often in generic QoS solutions.
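
To give an idea of the extra complexity, here is a hedged sketch of what a dedicated VoIP class could look like under HFSC, with a real-time curve bounding delay on top of the usual link-share curve; the rates, delay, and class IDs are made up and assume a root hierarchy like the earlier sketch:

Code:
# Hypothetical VoIP class: rt guarantees bandwidth/delay, ls shares the excess
tc class add dev eth0 parent 1:1 classid 1:20 hfsc \
    rt m1 256kbit d 20ms m2 128kbit \
    ls rate 128kbit
tc qdisc add dev eth0 parent 1:20 fq_codel
# Bulk traffic stays in the default class (e.g. 1:10) defined elsewhere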
 
That part is Asus's own code. The tricky bit for me was to integrate their code on top of my highly customized DHCP code.

The same integration was also done to the DNSFilter page (it was the first one I did, as a proof-of-concept).
If you use the HTTPS interface, you get the following error:
client_function.js:260 Mixed Content: The page at 'https://router.asus.com:**/index.asp' was loaded over HTTPS, but requested an insecure script 'http://nw-dlcdnet.asus.com/plugin/js/ouiDB.js'. This request has been blocked; the content must be served over HTTPS.

Do you think there is an easy way to fix this? Sadly, the content is not available over HTTPS, and the blocked request is probably what's breaking the updateManufacturer part.

Edit: Yeah, it breaks the manufacturer recognition (it gets stuck at "loading manufacturer...").

Edit 2: Maybe you could ship a local copy of the latest version with the firmware, and if fetching the remote copy fails, fall back to that.
 