[Asus RT-AX88U] Experiences & Discussion


Which is expected: the BCM4908 uses the same 1.8 GHz B53 cores as the BCM4906.

160 MHz stability issues seem to be a known problem at the moment; a few people have reported them. I wonder how much of it is due to the fact that 160 MHz is a pretty wide band that is highly likely to run into interference issues (just like trying to use 40 MHz on the 2.4 GHz band).

From the limited info out there, the DFS implementation seems like a big part of the issue. If any radar is detected, the radio shuts down for 60 s while it scans for another channel, potentially running into the same issue multiple times in a row.

Besides that though, the unit seems pretty solid; 1 day of uptime so far.
 
Unless you are certain you don't have any frequent radar signals in your vicinity, I would try to avoid using DFS channels altogether. Where I live there are frequent radar signals every day from weather stations and satellites in the area, which knocks my access points off the DFS channels all the time, so I have just avoided using them.
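For anyone unsure whether radar hits are actually the culprit, the system log usually records DFS/radar events when the driver vacates a channel. A rough sketch of the check over SSH; /tmp/syslog.log is the stock Asuswrt log path and the exact wording of the driver messages varies by firmware, so treat both as assumptions:

Code:
# look for recent radar / DFS related entries in the system log
# (adjust the path if your firmware logs somewhere else)
grep -iE "radar|dfs" /tmp/syslog.log | tail -n 20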
 
From the limited info out there, the DFS implementation seems like a big part of the issue. If any radar is detected, the radio shuts down for 60 s while it scans for another channel, potentially running into the same issue multiple times in a row.

DFS is probably a big part of the problem as well, since 160 MHz usage tends to require using some of the DFS channel space.

160 MHz is another one of these "nice on paper, impractical in real-life usage" ideas, I suspect.

Besides that though, the unit seems pretty solid; 1 day of uptime so far.

Same here, after having switched to it as my main router for a week now. Aside from a lot of random radio errors two days ago (which were resolved by a simple power cycle), and AiProtection's Bandwidth page sometimes missing 2/3 of my clients until I hit the Network Map page to force a refresh, it's been working really well here.
 
I may have missed it, but I don’t see any reports raving about improved Wi-Fi performance...
 
I may have missed it, but I don’t see any reports raving about improved Wi-Fi performance...

Until we get matching clients, I don't expect any notable performance change on the wifi side.
 
Is that what you are seeing? How about others?

My apartment is too small for me to notice any real wifi change. My previous RT-AC88U had no problem reaching the furthest room and still gave me 1-2 bars on my tablet. My laptop is almost always used in the same room as the router, so the link rate is always in the 7xx-866 range.
 
Runner is the new branding for Flow Accelerator / Level 2 acceleration, which has been described like this by Asus:

Level 2: Level 1 (CTF) + FA (Flow Accelerator): a hardware NAT acceleration mechanism designed for accelerating wired DHCP and static IP connections.
You will need the Flow Accelerator option to fully take advantage of a gigabit service if your internet provider offers one.

Flow Accelerator is achieved through special hardware-related designs.

EDIT:

Both CTF and FA / Runner acceleration are on by default. But Level 2 (FA/Runner) will be disabled when you activate certain functions like AiProtection or QoS, as those require additional inspection of the NAT traffic in order to work, so the acceleration can't be used.

It is my understanding that Packet Runner is what we know as CTF (Cut Through Forwarding), and Flow Cache is what we know as FA (Flow Accelerator). I would think the AX88U would be able to hit near-gigabit speeds with just Packet Runner enabled, as I was able to get 940 Mbps WAN<>LAN speeds with just CTF enabled on my old 68U, albeit pegging core 1 at 100%.
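If anyone wants to check which acceleration flags are set on their unit, the relevant nvram variables can be read over SSH. This is only a sketch: the names below are what I'd expect from older platforms (ctf_disable, ctf_fa_mode) plus the HND-style names (fc_disable, runner_disable), and not all of them will exist on every model or firmware version; nvram simply prints an empty value for anything it doesn't know.

Code:
# print the acceleration-related nvram flags
# (variable names vary by platform/firmware; unknown ones come back empty)
for v in ctf_disable ctf_fa_mode fc_disable runner_disable qos_enable; do
    echo "$v=$(nvram get $v)"
done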
 
Might be; I haven't really tested or used QoS or anything that would disable FA in the past, so I have no reference. But I'm for sure not able to achieve a full 500/500 Mbit/s on the RT-AX88U when enabling Adaptive QoS.
 
EDIT:

So right after I spent an hour testing and making this post, it decides to work o_O

Code:
PS C:\Users\Adamm\Downloads\iperf-2.0.9-win64> ./iperf -u -c  192.168.1.1 -b 1000M
------------------------------------------------------------
Client connecting to 192.168.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11.76 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.101 port 56596 connected with 192.168.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   985 MBytes   827 Mbits/sec
[  3] Sent 702855 datagrams

Code:
Running warmup...
Running a 400MB file write on \\RT-AX88U-DC28\test (at SSD) 5 times...
Iteration 1:     43.65 MB/sec
Iteration 2:     42.99 MB/sec
Iteration 3:     40.50 MB/sec
Iteration 4:     50.63 MB/sec
Iteration 5:     45.19 MB/sec
-----------------------------
Average (W):     44.59 MB/sec
-----------------------------
Running a 400MB file read on \\RT-AX88U-DC28\test (at SSD) 5 times...
Iteration 1:    118.83 MB/sec
Iteration 2:    127.29 MB/sec
Iteration 3:    128.41 MB/sec
Iteration 4:    127.62 MB/sec
Iteration 5:    127.47 MB/sec
-----------------------------
Average (R):    125.92 MB/sec
-----------------------------

When it does work, you end up getting 10 MB/s faster reads than Ethernet, and 50 MB/s faster reads than over 80 MHz channels via SMB. After this it looks like we run into a CPU bottleneck.
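If you want to confirm the CPU bottleneck yourself, watching the router over SSH during a transfer is enough; busybox top is built into the firmware. A process (smbd or iperf) sitting at the top of the %CPU column for the whole run is the giveaway. A minimal sketch:

Code:
# refresh every second while a transfer is running and watch the %CPU column
top -d 1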

For comparison, I also ran iperf from my AC86U media bridge with a 1733 Mbps link speed, which uses 4 streams rather than 160 MHz channels.

Code:
skynet@RT-AC86U-2EE8:/tmp/home/root# iperf -u -c  192.168.1.1 -b 1000M
------------------------------------------------------------
Client connecting to 192.168.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11.22 us (kalman adjust)
UDP buffer size:  512 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.100 port 58257 connected with 192.168.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   806 MBytes   675 Mbits/sec
[  3] Sent 574901 datagrams
[  3] Server Report:
[  3]  0.0-10.3 sec   561 KBytes   448 Kbits/sec  16.413 ms 392841/    0 (inf%)

Surprisingly there isn't much performance gain over a 2 stream 80 MHz client, and it's slower than our 160 MHz channel client running at a similar link speed o_O


EDIT 2:

So, not being content, I kept digging. Our 4 stream media bridge client's results were limited by the single UDP connection; when we bump this to 5 parallel connections, we see the results we initially anticipated.

Code:
skynet@RT-AC86U-2EE8:/tmp/home/root# iperf -u -c  192.168.1.1 -b 1000M -P 5
------------------------------------------------------------
Client connecting to 192.168.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11.22 us (kalman adjust)
UDP buffer size:  512 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.100 port 51698 connected with 192.168.1.1 port 5001
[  4] local 192.168.1.100 port 41208 connected with 192.168.1.1 port 5001
[  9] local 192.168.1.100 port 54085 connected with 192.168.1.1 port 5001
[  3] local 192.168.1.100 port 34501 connected with 192.168.1.1 port 5001
[  7] local 192.168.1.100 port 37033 connected with 192.168.1.1 port 5001
read failed: Connection refused
[  4] WARNING: did not receive ack of last datagram after 1 tries.
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   305 MBytes   255 Mbits/sec
[  4] Sent 217683 datagrams
read failed: Connection refused
[  9] WARNING: did not receive ack of last datagram after 5 tries.
[  9]  0.0-10.0 sec   322 MBytes   270 Mbits/sec
[  9] Sent 229954 datagrams
read failed: Connection refused
[  3] WARNING: did not receive ack of last datagram after 9 tries.
[  3]  0.0-10.0 sec   306 MBytes   257 Mbits/sec
[  3] Sent 218605 datagrams
[  7] WARNING: did not receive ack of last datagram after 10 tries.
[  7]  0.0-10.0 sec   309 MBytes   259 Mbits/sec
[  7] Sent 220680 datagrams
[  5] WARNING: did not receive ack of last datagram after 10 tries.
[  5]  0.0-10.0 sec   303 MBytes   254 Mbits/sec
[  5] Sent 216260 datagrams
[SUM]  0.0-10.0 sec  1.51 GBytes  1.29 Gbits/sec
[SUM] Sent 1103182 datagrams

1.29 Gbits/sec, nice! Let's see if we can improve our 160 MHz client results. I moved my laptop from my usual testing place to a table directly in front of the router and ran 5 UDP streams.

Code:
PS C:\Users\Adamm\Downloads\iperf-2.0.9-win64> ./iperf -u -c  192.168.1.1 -b 1000M -P 5
------------------------------------------------------------
Client connecting to 192.168.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11.76 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  7] local 192.168.1.101 port 62536 connected with 192.168.1.1 port 5001
[  4] local 192.168.1.101 port 62533 connected with 192.168.1.1 port 5001
[  5] local 192.168.1.101 port 62534 connected with 192.168.1.1 port 5001
[  6] local 192.168.1.101 port 62535 connected with 192.168.1.1 port 5001
[  3] local 192.168.1.101 port 62532 connected with 192.168.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  7]  0.0-10.0 sec   290 MBytes   243 Mbits/sec
[  7] Sent 206912 datagrams
[  4]  0.0-10.0 sec   298 MBytes   250 Mbits/sec
[  4] Sent 212571 datagrams
[  5]  0.0-10.0 sec   295 MBytes   247 Mbits/sec
[  5] Sent 210226 datagrams
[  6]  0.0-10.0 sec   296 MBytes   248 Mbits/sec
[  6] Sent 211243 datagrams
[  3]  0.0-10.0 sec   298 MBytes   250 Mbits/sec
[  3] Sent 212518 datagrams
[SUM]  0.0-10.0 sec  1.44 GBytes  1.24 Gbits/sec
[SUM] Sent 1053470 datagrams

1.24 Gbits/sec. Now those are some exciting wireless speeds for a 2 stream AC client.

It would be great to see some more scientific testing *hint* @thiggins :p
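For anyone wanting to repeat these runs, the router end just needs an iperf 2 UDP server listening; the client commands are shown in the output above. This assumes iperf 2.x is installed on the router (e.g. via Entware), since it isn't part of the stock firmware.

Code:
# on the router: listen for UDP tests and print one-second interval reports
iperf -s -u -i 1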

EDIT 3:

It also seems the speeds we received during our tests are the total possible throughput to all clients. I tried to run an iperf test on both my testing clients at the same time and received almost identical numbers when combining the results (1.228 Gbits/sec).

Code:
skynet@RT-AC86U-2EE8:/tmp/home/root# iperf -u -c  192.168.1.1 -b 1000M -P 5
------------------------------------------------------------
Client connecting to 192.168.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11.22 us (kalman adjust)
UDP buffer size:  512 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.100 port 42366 connected with 192.168.1.1 port 5001
[  3] local 192.168.1.100 port 52441 connected with 192.168.1.1 port 5001
[  5] local 192.168.1.100 port 46021 connected with 192.168.1.1 port 5001
[  8] local 192.168.1.100 port 60574 connected with 192.168.1.1 port 5001
[  7] local 192.168.1.100 port 46195 connected with 192.168.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  7]  0.0-10.0 sec   222 MBytes   185 Mbits/sec
[  7] Sent 158358 datagrams
[  7] Server Report:
[  7]  0.0-10.0 sec   155 KBytes   126 Kbits/sec   1.356 ms 133963/    0 (inf%)
[  4]  0.0-10.0 sec   228 MBytes   191 Mbits/sec
[  4] Sent 162731 datagrams
[  5]  0.0-10.0 sec   221 MBytes   185 Mbits/sec
[  5] Sent 157681 datagrams
[  4] Server Report:
[  4]  0.0-10.5 sec   159 KBytes   123 Kbits/sec  31.910 ms 137304/    0 (inf%)
[  4] 0.00-10.55 sec  1 datagrams received out-of-order
[  5] Server Report:
[  5]  0.0-10.5 sec   154 KBytes   120 Kbits/sec  31.975 ms 131907/    0 (inf%)
[  5] 0.00-10.55 sec  1 datagrams received out-of-order
[  3]  0.0-10.0 sec   226 MBytes   189 Mbits/sec
[  3] Sent 160913 datagrams
[  8]  0.0-10.0 sec   215 MBytes   180 Mbits/sec
[  8] Sent 153526 datagrams
[SUM]  0.0-10.0 sec  1.09 GBytes   929 Mbits/sec
[SUM] Sent 793209 datagrams
[  8] Server Report:
[  8]  0.0-10.6 sec   150 KBytes   116 Kbits/sec  32.739 ms 128279/    0 (inf%)
[  3] Server Report:
[  3]  0.0-10.6 sec   157 KBytes   122 Kbits/sec  33.064 ms 135229/    0 (inf%)




PS C:\Users\Adamm\Downloads\iperf-2.0.9-win64> ./iperf -u -c  192.168.1.1 -b 1000M -P 5
------------------------------------------------------------
Client connecting to 192.168.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11.76 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  7] local 192.168.1.101 port 61293 connected with 192.168.1.1 port 5001
[  4] local 192.168.1.101 port 61290 connected with 192.168.1.1 port 5001
[  5] local 192.168.1.101 port 61291 connected with 192.168.1.1 port 5001
[  3] local 192.168.1.101 port 61289 connected with 192.168.1.1 port 5001
[  6] local 192.168.1.101 port 61292 connected with 192.168.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  7]  0.0-10.0 sec  71.1 MBytes  59.6 Mbits/sec
[  7] Sent 50709 datagrams
[  4]  0.0-10.0 sec  71.6 MBytes  60.1 Mbits/sec
[  4] Sent 51106 datagrams
[  5]  0.0-10.0 sec  71.2 MBytes  59.8 Mbits/sec
[  5] Sent 50811 datagrams
[  3]  0.0-10.0 sec  71.7 MBytes  60.1 Mbits/sec
[  3] Sent 51124 datagrams
[  6]  0.0-10.0 sec  70.9 MBytes  59.5 Mbits/sec
[  6] Sent 50560 datagrams
[SUM]  0.0-10.0 sec   357 MBytes   299 Mbits/sec
[SUM] Sent 254310 datagrams

So it looks like we have pushed AC about as far as it will go. I'd be interested to see the same test run with two of these devices so we can see if the AX hype translates into real-world usage.
 
With a bit of SMB tuning I was able to improve results across the board. I came across this notice in the Samba wiki:

Settings That Should Not Be Set
The Samba team highly recommends not setting the parameters described in this section without understanding the technical background and knowing the consequences. In most environments, setting these parameters or changing the defaults decreases the Samba network performance.




The socket options Parameter
Modern UNIX operating systems are tuned for high network performance by default. For example, Linux has an auto-tuning mechanism for buffer sizes. When you set the socket options parameter in the smb.conf file, you are overriding these settings. In most cases, setting this parameter decreases the performance.

To unset the parameter, remove the socket options entry from the [global] section of your smb.conf file.

Much like how we previously discovered that removing the send and receive socket buffer limits increases performance, getting rid of the rest of the socket options helps too.
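To see what the firmware currently sets before touching anything, the generated Samba config can be inspected over SSH. A small sketch, assuming the generated config lives at /etc/smb.conf as on stock Asuswrt; adjust the path if your build puts it elsewhere:

Code:
# show the socket options line the firmware currently generates
# (path is an assumption; some builds may generate the config elsewhere)
grep -i "socket options" /etc/smb.conf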

LAN after:

Code:
Running a 400MB file write on \\RT-AX88U-DC28\test (at SSD) 5 times...
Iteration 1:     72.74 MB/sec
Iteration 2:     81.08 MB/sec
Iteration 3:     79.82 MB/sec
Iteration 4:     74.09 MB/sec
Iteration 5:     70.84 MB/sec
-----------------------------
Average (W):     75.71 MB/sec
-----------------------------
Running a 400MB file read on \\RT-AX88U-DC28\test (at SSD) 5 times...
Iteration 1:    117.46 MB/sec
Iteration 2:    118.00 MB/sec
Iteration 3:    117.83 MB/sec
Iteration 4:    118.55 MB/sec
Iteration 5:    116.92 MB/sec
-----------------------------
Average (R):    117.75 MB/sec
-----------------------------

That's almost a 50% write speed increase immediately after removing all socket options.

Wireless results were more modest due to the CPU bottleneck, but the change improved the stability of the transfer speed.

Code:
Running a 400MB file write on \\RT-AX88U-DC28\test (at SSD) 5 times...
Iteration 1:     46.08 MB/sec
Iteration 2:     48.56 MB/sec
Iteration 3:     46.44 MB/sec
Iteration 4:     48.36 MB/sec
Iteration 5:     46.16 MB/sec
-----------------------------
Average (W):     47.12 MB/sec
-----------------------------
Running a 400MB file read on \\RT-AX88U-DC28\test (at SSD) 5 times...
Iteration 1:    127.39 MB/sec
Iteration 2:    128.59 MB/sec
Iteration 3:    128.19 MB/sec
Iteration 4:    127.42 MB/sec
Iteration 5:    125.43 MB/sec
-----------------------------
Average (R):    127.40 MB/sec
-----------------------------

@RMerlin this is definitely worth looking into once we can confirm removing the remaining options doesn't have any adverse effect.


For anyone wishing to replicate this, create the file "/jffs/scripts/smb.postconf" with the following content.

Code:
#!/bin/sh
# Asuswrt-Merlin postconf script: strip any "socket options" lines from the
# generated smb.conf before Samba uses it
CONFIG="$1"
sed -i '\~socket options~d' "$CONFIG"

After creating the file, chmod it and restart the nasapps service:

Code:
chmod +x /jffs/scripts/smb.postconf

service restart_nasapps
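To verify the postconf script actually took effect, check the regenerated config after the restart (same /etc/smb.conf path assumption as above):

Code:
# should print nothing once the socket options line has been stripped
grep -i "socket options" /etc/smb.conf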
 
@RMerlin this is definitely worth looking into once we can confirm removing the remaining options doesn't have any adverse effect.

The problem is results vary between users and environments (and also between models and architectures). I've spent hours testing various combinations in the past, and the best results for my test scenarios came from what I'm currently using. At the time I was getting 105 MB/s in my own test environment using the settings I use in my firmware.

I don't know what Samba considers "modern". Does kernel 2.6.36 qualify as modern enough to rely strictly on the kernel's auto-tuning?

Also keep in mind that the stock firmware uses Samba 3.0.xx, while my firmware uses 3.6.25, so the results will vary between stock firmware and mine.
 
The problem is results vary between users and environments (and also between models and architectures). I've spent hours testing various combinations in the past, and the best results for my test scenarios came from what I'm currently using. At the time I was getting 105 MB/s in my own test environment using the settings I use in my firmware.

I don't know what Samba considers "modern". Does kernel 2.6.36 qualify as modern enough to rely strictly on the kernel's auto-tuning?

Also keep in mind that the stock firmware uses Samba 3.0.xx, while my firmware uses 3.6.25, so the results will vary between stock firmware and mine.

Agreed, making sure it works reliably for all users no matter the platform/device is the most important part. With that being said, it does give a performance boost when the above results can be replicated. Now to see if other devices give the same experience.
 
Agreed, making sure it works reliably for all users no matter the platform/device is the most important part. With that being said, it does give a performance boost when the above results can be replicated. Now to see if other devices give the same experience.

If it can be confirmed that it improves performance for everyone on this specific model (and most likely the RT-AC86U as well), then I could change the settings specifically for the HND platform.
 
I don't know what Samba considers "modern". Does kernel 2.6.36 qualify as modern enough to rely strictly on the kernel's auto-tuning?
Yes, Linux 2.6.7 and later.
With that being said, it does give a performance boost when the above results can be replicated. Now to see if other devices give the same experience.
When using the NAS performance tester on other devices, it is better to run the tests with the 100 MB file option to avoid creating memory pressure that could influence the results.
 
I've still not had any noticeable issues after 13 days 8 hours 32 minute(s) 31 seconds during which the router has received 337 GB and transmitted 13.58 GB of external data.
 
I've still not had any noticeable issues after 13 days 8 hours 32 minute(s) 31 seconds during which the router has received 337 GB and transmitted 13.58 GB of external data.

Are you using 80 or 160 MHz channels? I noticed 160 MHz was very flaky until I changed the wireless settings to force 160 MHz on channel 36 (region set to Asia); not one crash since.
 
I've not even got 160 MHz enabled, as I have nothing that would support it. Although the 20/40/80 MHz option is enabled, I don't believe anything is on 80 MHz either.
 
Agreed, making sure it works reliably for all users no matter the platform/device is the most important part. With that being said, it does give a performance boost when the above results can be replicated. Now to see if other devices give the same experience.

My Asustor NAS uses this:

Code:
socket options = IPTOS_LOWDELAY TCP_NODELAY

Can anyone see what Synology or QNAP uses on their end? If in doubt, I'd be tempted to simply follow what a NAS uses, since I expect these to be as optimized as possible for the majority of users.
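For anyone wanting to check their own NAS, most of them ship the standard Samba tools, so testparm can dump the effective configuration. A quick sketch, assuming testparm is available on the box:

Code:
# print the effective Samba configuration and filter for socket options
testparm -sv 2>/dev/null | grep -i "socket options"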

SO_KEEPALIVE is probably something Asustor needs to add someday. I see dozens of "zombie" connections on my NAS's Samba at all times...
 
