iperf tests between Asus RT-AC68U and 1Gbps capable client ~615Mbps


cuekay

Occasional Visitor
Hello. I've recently been having severe speed issues within the LAN as well as with the WAN, and I ran some iperf tests to identify bottlenecks. My current setup includes 3 desktops connected to three ports on the Asus RT-AC68U and a Zyxel 1Gbps-capable switch connected to the fourth port, which in turn connects 3 laptops, each running one virtual machine. While that's the standard configuration, I've tried repatching the cables between the devices to run four major series of tests:

1) iperf tests between devices on the switch alone
2) iperf tests between devices on the Asus RT-AC68U alone
3) iperf tests between devices on both the switch and Asus RT-AC68U
4) iperf tests between devices and the Asus RT-AC68U itself
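The basic client/server pairing behind each series can be sketched like this (assuming iperf3 syntax; the classic iperf flags are nearly identical, and the hostnames are placeholders for the machines above):

```shell
# On the receiving end (e.g. desktop2), start a listener:
iperf3 -s

# On the sending end, run a 10-second TCP test toward it,
# reporting per-second intervals:
iperf3 -c desktop2 -t 10 -i 1

# Repeat with -R so the remote end sends instead; this tests the
# reverse direction without swapping the two roles:
iperf3 -c desktop2 -t 10 -i 1 -R
```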

I've noticed that while the bandwidth results yield anywhere from 800 to 1000 Mbps on iperf tests in the first series (switch only), the results for the second, third, and fourth series hover around 615 Mbps. I was wondering what I can do configuration-wise to get the bandwidth to break 800 Mbps, or whether the discrepancy in speed can be chalked up to all the other stuff the router is tasked with.

Per the webui, the CPU usage on the router averages around 3% with spikes at 6% whenever I view it (I haven't logged it), while RAM remains steady at around 68MB, or 27% of use. Besides the web, DHCP, WINS, and DNS servers and the processes baked into the firmware, I've enabled the OpenVPN server and have Optware installed for fringe cases such as getting iperf; a USB 3.0 flash drive is plugged in for Optware.

A top snapshot that I think is fairly representative of what I see on average is below:

Code:
Mem: 69124K used, 186592K free, 0K shrd, 584K buff, 11436K cached
CPU:  0.0% usr  4.5% sys  0.0% nic 90.9% idle  0.0% io  0.0% irq  4.5% sirq
Load average: 0.00 0.05 0.06 1/94 30608
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
  442     1 admin    S     7604  2.9   0  4.5 httpd
30087 30086 admin    S    18424  7.2   1  0.0 /opt/sbin/smbd -D
30086     1 admin    S    18368  7.1   1  0.0 /opt/sbin/smbd -D
    1     0 admin    S     6360  2.4   0  0.0 /sbin/preinit
  448     1 admin    S     6228  2.4   1  0.0 watchdog
 1239     1 admin    S     6224  2.4   0  0.0 bwdpi_wred_alive
  449     1 admin    S     6224  2.4   0  0.0 sw_devled
  421     1 admin    S     6224  2.4   1  0.0 /sbin/wanduck
13312     1 admin    S     6224  2.4   1  0.0 ntp
13421     1 admin    S     6224  2.4   1  0.0 disk_monitor
 1442   448 admin    S     6224  2.4   0  0.0 hour_monitor
  459   448 admin    S     6224  2.4   0  0.0 ots
 4137     1 admin    S     6224  2.4   0  0.0 usbled
  429     1 admin    S     6224  2.4   0  0.0 wpsaide
  457     1 admin    S     6224  2.4   1  0.0 bwdpi_check
  340     1 admin    S     6216  2.4   0  0.0 console
30036     1 admin    S     4400  1.7   0  0.0 /opt/bin/ntpd -c /opt/etc/ntp/ntp.
13598     1 admin    S     4032  1.5   1  0.0 /etc/openvpn/vpnserver1 --cd /etc/
 1324  1319 admin    S     3684  1.4   1  0.0 data_colld -i 1800 -p 43200 -b -w
 1368  1319 admin    S     3684  1.4   0  0.0 data_colld -i 1800 -p 43200 -b -w
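Since the webui is only being spot-checked, CPU use during a test run could be logged instead of eyeballed. A minimal sketch (assuming BusyBox top on the router; the log path and intervals are arbitrary):

```shell
# Capture a batch-mode top snapshot every 5 seconds for a minute
# while an iperf run is in progress, so short CPU spikes between
# manual webui refreshes aren't missed.
LOG=/tmp/iperf_cpu.log
for i in 1 2 3 4 5 6 7 8 9 10 11 12; do
    date >> "$LOG"
    top -b -n 1 | head -3 >> "$LOG"   # header lines: Mem, CPU, load average
    sleep 5
done
```

Reviewing the log after a slow run would show whether a sirq or usr spike lines up with the throughput dip.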

Anyway, any tips on tweaking the config, or any kick in the rear for reporting this first-world problem as an "issue", are much appreciated. But I'm pretty sure I'm asking the right folks, since I assume we're all looking to get as much mileage -- be it in the form of stability, speed, or features -- from our Merlin-powered routers. Thanks much.
 
What firmware are you running?

Have you ever performed a full reset to factory defaults and then minimally and manually configured the router (do not use a backup config file) to secure it and connect to your ISP?

http://www.snbforums.com/threads/no...l-and-manual-configuration.27115/#post-205573

You may also be interested in john9527's NVRAM Save/Restore utility if you have a lot of time invested in configuring your router.

http://www.snbforums.com/threads/user-nvram-save-restore-utility-r24.19521/

If nothing else, it will give you a text-based (human-readable) backup of all your settings.

Is there anything in the VMs' settings that might affect their performance? Are the VMs running with enough RAM, and are they stored on an SSD or an HDD?

Is toggling the USB 3.0 interference setting any help? Does removing the USB drive completely (at least for testing) increase performance?

With anything you try above, always reboot the router and the two computers you're using before you test for throughput again.
 
I've noticed that while the bandwidth results yield anywhere from 800 to 1000 Mbps on iperf tests in the first series (switch only), the results for the second, third, and fourth series hover around 615 Mbps. I was wondering what I can do configuration-wise to get the bandwidth to break 800 Mbps, or whether the discrepancy in speed can be chalked up to all the other stuff the router is tasked with.

That sounds about right... I'm not seeing a problem here...

On GigE with iperf you should see around 940-960 Mbps over TCP, and similar with UDP, discounting missed packets...

Not to beat the memory dead horse, but this is a good example of consumer gear... skinny pipes into/out of the kernel, and low speed at that.

Anything less is a performance issue with the box in the middle...
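The ~940 Mbps figure follows directly from per-packet overhead on the wire. A quick back-of-the-envelope check (assuming a standard 1500-byte MTU, no jumbo frames, and ignoring TCP options):

```shell
# Each 1500-byte IP packet carries 1500 - 20 (IP) - 20 (TCP) = 1460
# bytes of payload, but occupies 1500 + 14 (Ethernet header) + 4 (FCS)
# + 8 (preamble) + 12 (inter-frame gap) = 1538 bytes of line time.
# Maximum TCP goodput on GigE is therefore:
awk 'BEGIN { printf "%.0f Mbps\n", 1000 * 1460 / 1538 }'
# → 949 Mbps
```

TCP timestamps and the occasional retransmit shave that down toward the ~940 Mbps that iperf typically reports.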
 
Example - source is a Mac, sink is Linux, going across a Netgear GS-108T switch... when I was testing my RCE-V2440 across the LAN/WAN interfaces, the numbers were the same...

(nice to have a little general-purpose Linux box on the LAN for stuff like this...)

Code:
$ iperf3 -c 192.168.1.20 -i1 -Z
Connecting to host 192.168.1.20, port 5201
[  4] local 192.168.1.100 port 64587 connected to 192.168.1.20 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   113 MBytes   947 Mbits/sec                  
[  4]   1.00-2.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   2.00-3.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   3.00-4.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   4.00-5.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   5.00-6.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   6.00-7.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   7.00-8.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   8.00-9.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   9.00-10.00  sec   112 MBytes   940 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  sender
[  4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver

iperf Done.

$ iperf3 -c 192.168.1.20 -i1 -Z -R
Connecting to host 192.168.1.20, port 5201
Reverse mode, remote host 192.168.1.20 is sending
[  4] local 192.168.1.100 port 64590 connected to 192.168.1.20 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   1.00-2.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   2.00-3.00   sec   112 MBytes   942 Mbits/sec                  
[  4]   3.00-4.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   4.00-5.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   5.00-6.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   6.00-7.00   sec   112 MBytes   942 Mbits/sec                  
[  4]   7.00-8.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   8.00-9.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   9.00-10.00  sec   112 MBytes   942 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver

iperf Done.
 
That sounds about right... I'm not seeing a problem here...

On GigE with iperf you should see around 940-960 Mbps over TCP, and similar with UDP, discounting missed packets...

Not to beat the memory dead horse, but this is a good example of consumer gear... skinny pipes into/out of the kernel, and low speed at that.

Anything less is a performance issue with the box in the middle...

sfx2000, even with this example, which clearly shows a lot of 'free RAM'?

Are faster/better RAM modules required (dual channel would be better, of course), or would a simple capacity increase be enough (I'm guessing not), if that were even possible?
 
sfx2000, even with this example, which clearly shows a lot of 'free RAM'?

Are faster/better RAM modules required (dual channel would be better, of course), or would a simple capacity increase be enough (I'm guessing not), if that were even possible?

Faster/wider memory paths help - the Broadcom solutions are all about the same - which makes sense, as HW-wise they're all pretty much the same - and SW can't help much there... cut/paste on the HDK/SDK that Broadcom delivers...

The one AC1900-class device that is kind of surprising is the WRT1900AC v1 under OpenWRT; it's pretty close to what I see with the Intel-based box in my testing - but it's a spendy beast, with a lot more discrete stuff happening there. Which makes the WRT that much more frustrating: the HW is quite good, but the SW usability/functionality is the weak spot in that device.

I think the uTik/ERLs might be pretty close, if not the same...

I haven't had a chance to put a QCA solution under the gun, at least not in a consumer solution that's available at the moment... but I would expect it to land somewhere between Broadcom's solution and the others...
 
Faster/wider memory paths help - the Broadcom solutions are all about the same - which makes sense, as HW-wise they're all pretty much the same - and SW can't help much there... cut/paste on the HDK/SDK that Broadcom delivers...

Thanks sfx2000. ;)

So it seems like the limiting factor in current ARM-based routers is the processor itself rather than the RAM? And any 'tuning' and limitations imposed on the Linux kernel that runs on it?

This is why I want an Intel-based processor. Even a Celeron (latest versions) delivers higher performance with equal if not better efficiency than anything ARM can offer.

The other options are not for me (particularly OpenWRT and Linksys hardware).
 
615 Mbps is close to what I would expect performance-wise in WAN-to-LAN throughput if you have the DPI engine enabled (that means Adaptive QoS, or some of the AiProtection options).
 
615 Mbps is close to what I would expect performance-wise in WAN-to-LAN throughput if you have the DPI engine enabled (that means Adaptive QoS, or some of the AiProtection options).

I didn't read the testing like that.

I think the OP is testing the LAN to LAN throughput through the router's LAN ports.
 
I didn't read the testing like that.

I think the OP is testing the LAN to LAN throughput through the router's LAN ports.

LAN traffic is switched, so the only potential bottlenecks are the clients and the cables themselves.
 
LAN traffic is switched, so the only potential bottlenecks are the clients and the cables themselves.

Agree - going across a switch should be minimal overhead... they're pretty fast internally...

The caveat is the WAN/LAN interface, and how that's implemented - Broadcom has their CTF, which offloads things into the on package switch, but this can break other things that might want to look into those packet flows - and then it's back to the kernel and bridge interfaces - and CPU performance, along with memory architecture can have some impact...
 
LAN traffic is switched, so the only potential bottlenecks are the clients and the cables themselves.

It doesn't seem to be the clients or the cables in this case. When the RT-AC68U router is used as a switch for the identical clients and cables, the throughput is about two-thirds of that (615 Mbps). When the Zyxel switch is used with those same cables and clients, the throughput is normal (800-1000 Mbps).
 
It doesn't seem to be the clients or the cables in this case. When the RT-AC68U router is used as a switch for the identical clients and cables, the throughput is about two-thirds of that (615 Mbps). When the Zyxel switch is used with those same cables and clients, the throughput is normal (800-1000 Mbps).

I never had any problem reaching 800-900 Mbps of LAN throughput with any of my routers. My NAS can push over 100 MB/s right now over SMB. And there are people who even get 940 Mbps of NAT throughput. The switch is definitely more than able to do so.
 
Re-reading the OP's post, it seems that his problem lies in the fact that he runs iperf on the router itself for tests 2, 3, and 4. In this case, the bottleneck is that the router's CPU isn't able to run iperf fast enough to generate a full 1 Gbps of traffic. His first test, where he only switches LAN traffic, gives him the expected ~1 Gbps throughput.
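The two setups stress very different things, which is worth spelling out (a sketch; the hostnames and the 192.168.1.1 router address are placeholders for the OP's setup):

```shell
# Through the router: both endpoints are LAN clients, the switch chip
# forwards frames in hardware, and the router's CPU is not in the
# data path. Expect close to wire speed (~940 Mbps).
#   desktop1$ iperf3 -s
#   desktop2$ iperf3 -c desktop1

# To the router: the router's CPU must generate or absorb the whole
# stream in software, so the result measures the router's CPU, not
# the network.
#   router$   iperf -s              # Optware iperf on the router
#   desktop1$ iperf -c 192.168.1.1
```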
 
I never had any problem reaching 800-900 Mbps of LAN throughput with any of my routers. My NAS can push over 100 MB/s right now over SMB. And there are people who even get 940 Mbps of NAT throughput. The switch is definitely more than able to do so.

What else could it be then?
 
2) iperf tests between devices on the Asus RT-AC68U alone
3) iperf tests between devices on both the switch and Asus RT-AC68U
4) iperf tests between devices and the Asus RT-AC68U itself

@L&LD - you raise a good point - why is the RT-AC68U slow across LAN ports?

I could understand LAN to WAN, but across the LAN ports, we should see around 940 to 960 Mbps, and he's reporting 615... and across the LAN ports, one wouldn't think that Asus' DPI engine or other stuff would come into play..

hmmm....
 
I think we need OP to clarify things - is he running iperf on the router itself, or across two end-points...
 
Hi all. Thanks for the feedback; I've been scratching my head over quite a number of things lately. To clarify, I ran the following tests:

TL;DR: the WAN port was not used in any of the tests, only the router's LAN ports. And while the first three series were tests between two of the six computers on the LAN, the last/fourth series was tests between one of the computers and the Asus router itself.



series 1:

switch ports 1 & 2
desktop1 (iperf -c desktop2) <====Zyxel switch====> desktop2 (iperf -s)
desktop1 (iperf -s) <====Zyxel switch====> desktop2 (iperf -c desktop1)

series 2:

Asus LAN ports 1, 2, 3, 4 in permutations (ports 1,2; ports 2,3; ports 3,4; etc.) (no WAN port used)
desktop1 (iperf -c desktop2) <====Asus====> desktop2 (iperf -s)
desktop1 (iperf -s) <====Asus====> desktop2 (iperf -c desktop1)

series 3:

using four of the eight ports on the Zyxel switch and all four ports of the Asus router
desktop1 (iperf -c desktop2) <====Zyxel switch====Asus====> desktop2 (iperf -s)
desktop1 (iperf -s) <====Zyxel switch====Asus====> desktop2 (iperf -c desktop1)

series 4:

desktop1 (iperf -c 192.168.1.1) <====> Asus (iperf -s)
Asus (iperf -c desktop1) <====> desktop1 (iperf -s)

While I'm sure there's a more elegant way to put this together, as I didn't do a good job of condensing my tests originally, I've unpacked them here for clarity. I can definitely unpack things a bit more, but I thought brevity was still wanted. I'm still catching up, so I'll be back with more if needed.
 
@L&LD - you raise a good point - why is the RT-AC68U slow across LAN ports?

I could understand LAN to WAN, but across the LAN ports, we should see around 940 to 960 Mbps, and he's reporting 615... and across the LAN ports, one wouldn't think that Asus' DPI engine or other stuff would come into play..

hmmm....

Broadcom's SoC does have its own internal QoS (they call it iQoS, I think). Maybe the Trend Micro engine configures the BCM5301x (the switch) to enable QoS. Hard to tell with all those black boxes involved.

There are a few other low-level parameters configurable on the BCM53010 that can potentially interfere. I remember a few years ago Asus tested one of these settings, which increased throughput but at the expense of increased latency. The change was reverted later.
 
