Anyone have 802.3ad link aggregation working on a GT-AC5300?


ratbuddy

Occasional Visitor
It's just not working for me. I (think I) have it set up properly on my Proxmox server, which has an Intel i350-T2 V2 NIC with its two ports connected to ports 5 and 6 on the router. SSHing into the router, I see nothing when I run:

ratbuddy@GT-AC5300:/sys/devices/virtual/net/bond0/bonding# cat slaves

I did enable aggregation through the web interface, but it doesn't appear to have done anything other than create a bond0 with no slaves. On the Proxmox side, the bond is there, but not working due to differing aggregation IDs for each port.

Do I need to fix this manually in the router, or have I overlooked something else?

Thanks for any advice!

edit: Oh, in the router network map / Ethernet ports, it shows 5 and 6 as 1 Gbps each; nothing is mentioned about aggregation.
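For comparison, here's roughly what the Proxmox side should look like in /etc/network/interfaces for an LACP bond bridged for VMs. This is just a sketch: the NIC names enp1s0f0/enp1s0f1, the bridge name vmbr0, and the addresses are placeholders; substitute your actual i350 port names.

```
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

With this in place, `cat /proc/net/bonding/bond0` on the Proxmox box should list both slaves; if the router side never responds to LACP, you'll still see the bond come up degraded.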
 
Still no luck. I get this in the logs on the client machine:

bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond

Asus responded to my ticket with a link to the 88U link aggregation instructions, and has ignored my responses. Awesome.
 

I just set up my NAS for "port trunking" (I wish everyone used the same term for 802.3ad). I think everything is running well, except that the router doesn't seem to handle file-transfer cancel requests properly (every time I cancel a transfer, the error below shows up in the log). All I had to do was make the configuration changes on my QNAP TS-470 Pro; I didn't have to change anything on the GT-AC5300 other than making sure I was using Ethernet ports 5/6 (which don't appear to map to the "ext switch ports" properly).

Here are the logs. I'll do more testing tonight and post results.

=====
May 14 11:14:25 kernel: eth4 (Ext switch port: 3) (Logical Port: 11) Link DOWN.
May 14 11:14:26 kernel: eth3 (Ext switch port: 2) (Logical Port: 10) Link DOWN.
May 14 11:14:28 kernel: eth4 (Ext switch port: 3) (Logical Port: 11) Link UP 1000 mbps full duplex
May 14 11:14:29 kernel: eth3 (Ext switch port: 2) (Logical Port: 10) Link UP 1000 mbps full duplex
May 14 11:15:15 kernel: BLOG ERROR blog_request :blog_key corruption when adding flow net_p=ffffffc02434b010 dir=0 old_key=0x60003845 new_key=0x600038b4
May 14 11:20:39 kernel: BLOG ERROR blog_request :blog_key corruption when adding flow net_p=ffffffc0213186f0 dir=0 old_key=0x60003afb new_key=0x60003a6e
May 14 11:22:23 kernel: BLOG ERROR blog_request :blog_key corruption when deleting flowfor net_p=ffffffc0213186f0
May 14 11:23:52 kernel: BLOG ERROR blog_request :blog_key corruption when deleting flowfor net_p=ffffffc02434b010
 
Can you please ssh into the router and cat /proc/net/bonding/bond0?

edit:
Ethernet ports 5/6 (which don't appear to map to "ext switch ports" properly).

Yeah, I noticed the int/ext switch ports don't line up. Watching /tmp/syslog.log on the router, the 2x2 group of ports on the left of the router, marked 1/2/5/6, comes up as int switch ports 1/0/3/2, in that order. The 2x2 group on the right doesn't show up in that logfile at all; I'm guessing those ports are controlled separately.
 
Can you please ssh into the router and cat /proc/net/bonding/bond0?

Here's the file:
=======================================
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
bond bond0 has no active aggregator
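That "no active aggregator" line is the tell: the bond device exists, but LACP never negotiated with the link partner. A quick sketch of a check, run here against sample text mirroring the output above (on the router you'd replace the here-doc with `STATUS=$(cat /proc/net/bonding/bond0)`):

```shell
#!/bin/sh
# Sketch: decide whether an 802.3ad bond actually negotiated LACP,
# based on the fields in /proc/net/bonding/bond0.
# Sample text mirroring the failing output above; on a real router use:
#   STATUS=$(cat /proc/net/bonding/bond0)
STATUS=$(cat <<'EOF'
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: down
802.3ad info
LACP rate: slow
bond bond0 has no active aggregator
EOF
)

if echo "$STATUS" | grep -q 'has no active aggregator'; then
    echo "LACP negotiation failed: no active aggregator"
else
    echo "bond has an active aggregator"
fi
```

On a healthy bond, the driver instead prints an "Active Aggregator Info" section, and both slaves report the same Aggregator ID.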
 
One last thing to add. I now see both NAS NICs in network map with two different MAC addresses, yet the same IP address.
 
Huh, mine shows the same in the bond0 file. Have you confirmed greater than 1 Gb/s transfer rates? That's where it's not working for me: the client ports are getting different aggregator IDs. I'm essentially seeing the same thing as this: https://access.redhat.com/solutions/160283 (I can't access the 'solution' there; it's behind a paywall).
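For anyone else chasing this: the per-slave aggregator IDs are visible in /proc/net/bonding/bond0 on the client. A small sketch that pulls them out, run here against sample text showing the failure mode (the interface names and IDs are illustrative; on a real box you'd pipe the file in):

```shell
#!/bin/sh
# Sketch: list each slave's Aggregator ID from bonding status text.
# With working LACP, both slaves share one ID; differing IDs mean each
# port formed its own aggregator (the symptom described above).
# On a real machine: sh this_script.sh < /proc/net/bonding/bond0
parse() {
    awk '/^Slave Interface:/ { iface = $3 }
         /Aggregator ID:/ && iface { print iface, $NF }'
}

# Sample status text showing the failure mode (IDs 1 and 2 differ):
parse <<'EOF'
Slave Interface: eth0
Aggregator ID: 1
Slave Interface: eth1
Aggregator ID: 2
EOF
```

If the two lines print different IDs, the switch side never agreed to aggregate the ports, which matches the "no 802.3ad response" warning earlier in the thread.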
 
Have you confirmed greater than 1 Gb/s transfer rates?

Testing this configuration has been a bit more difficult than I thought it would be. I'm using two clients: a PC wired to the GT-AC5300 over gigabit Ethernet, and a Surface on 5 GHz wireless at 866.5 Mbps.

The NAS has two 1 Gbps NICs connected to the GT-AC5300. I'm using balance-alb aggregation (although I think balance-tlb might be better).

When I have the aggregation turned on (I only activate it on the NAS), I can sustain a 10 GB file transfer to the PC at about 100 MB/s while the Surface transfers the same file, concurrently, at 30 MB/s. I see peaks of about 140-150 MB/s combined from both devices. But I'm only using the Windows 10 file-transfer dialog as my measurement, so while that sustained rate is right at a gigabit, and the peaks are above it, I don't have very precise numbers.

When I shut off the aggregation, the PC transfer rate drops to about 70 MB/s while the Surface stays at about 30 MB/s (maybe slightly less).

So, while I'm not confident this configuration is operating properly, it does appear to perform better with aggregation turned on than without, so I'm sticking with it. I think the only time I'll notice a difference is when multiple clients are performing reads/writes at the same time.
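As a rough sanity check on those dialog-box numbers (treating MB as 10^6 bytes), the combined sustained rate works out just above a single gigabit link, which matches what aggregation on only the NAS side should buy you:

```shell
# Combined sustained rate from both clients: 100 MB/s (PC) + 30 MB/s (Surface).
# Convert MB/s to Gbit/s: multiply by 8 bits/byte, divide by 1000.
awk 'BEGIN { printf "%.2f Gbit/s\n", (100 + 30) * 8 / 1000 }'
# Prints: 1.04 Gbit/s
```

So the sustained combined rate is just over what one gigabit port can carry, and the 140-150 MB/s peaks (1.12-1.2 Gbit/s) clearly require both links.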
 
I got this response to my ticket followup:
---
Thank you for the information.

I already read your previous response and we already escalated this to our higher management and to our hardware team for further investigation.

We apologize that this feature is not working properly on your GT-AC5300.

Thank you for the feedback.
---

I guess that means they're aware of an issue? We'll see...
 
I just found that under Traffic Analyzer > Traffic Monitor there are tabs for "Bonding (LAN1)" and "Bonding (LAN2)." I don't recall seeing those before. I shut off the port trunking on my NAS, but these tabs were still present on the Traffic Monitor.

However, I don't see any traffic being measured by Traffic Monitor when I have the port trunking turned on.

I wonder if these tabs were present prior to the most recent firmware update?
I wonder if Traffic Monitor just isn't counting traffic on the aggregated links correctly; it certainly doesn't appear to be counting it at all.
 
Nice find, I don't remember those before. Regardless, they don't show any traffic at all. I think aggregation is just broken on this thing. Hopefully it's fixed soon.
 
PC0: Linux host with two NICs (the standard kernel 'bonding' module in 802.3ad mode).
PC1 and PC2: Windows hosts with one NIC each.
Router: default bonding settings, FW 3.0.0.4.382.12184.
Tests: simultaneous TCP transfers from PC1 and PC2 to PC0 and back. Full-duplex speed is up to 3.6-3.7 Gbit/s.
NB: you should connect at least one single-NIC client to the LAN1 or LAN2 router port. Ports LAN3/4/7/8 are connected to the other interfaces through a single gigabit link. That's not so obvious.
 
Urgh, I was not aware of the 1/2/5/6 <-> 3/4/7/8 interconnect being limited to 1 Gb/s; I'll have to play with it tonight. Thanks.
 
Having the same problem with LAG from GT-AC5300 to Synology DS1513. Did you ever get this to work or received a response from Asus?
 
Doesn't work on my setup either. GT-AC5300 connected to Synology DS918+.

802.3ad is enabled on both devices, Synology says that it failed to establish a connection; the router log says bonding is enabled but no bond established. Really annoying.

Balance XOR option works but I prefer LACP.
 
I have the same hardware as you and obviously having the same problem.
 
