Solved: LACP

Is it possible to enable more than the first two ports to use LACP 802.3ad? I was reading a tutorial and it says only two ports are possible. My GT-AX11000 can already do the first two ports without any scripts. https://github.com/RMerl/asuswrt-merlin.ng/wiki/Link-Aggregation

I don't think that's something that could be done with just firmware or scripting; it would require hardware (or at the very least, microcode) support.

A TP-Link 8-port smart switch can do up to 4 ports in a LAG (not sure if they support LACP or not, but anything that supports LACP should be able to do a static LAG). They usually run around $25.
 
In factory firmware, how the switch inside the router/AP is configured is hard-coded...

Keep in mind that with 802.3ad, Ethernet rules apply: it doesn't double the speed of any single link, but it does add additional aggregate bandwidth...

In easy terms, it's much like the checkout stands at the supermarket: each stand can only process a customer at a given speed, and opening another checkout adds additional bandwidth, but each checkout can still only process so much...

Link Aggro has been oversold as a checkbox feature; it's marketing.
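To put the checkout analogy in code: here's a toy model of the per-flow hashing an 802.3ad LAG typically does. The XOR-of-last-MAC-octet hash is in the spirit of Linux bonding's xmit_hash_policy=layer2; actual vendor hashes vary, so treat this as a sketch, not any router's real implementation.

```python
# Toy model of per-flow link selection in an 802.3ad LAG.
# Sketch of a layer-2 style hash (XOR of last MAC octets),
# similar in spirit to Linux bonding's xmit_hash_policy=layer2.

def pick_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % num_links

pc  = "aa:bb:cc:dd:ee:01"  # made-up addresses for illustration
nas = "aa:bb:cc:dd:ee:10"

# Every frame between a given src/dst pair hashes to the same member
# link, so one transfer never exceeds one link's speed:
print(pick_link(pc, nas, 2))   # same link every time for this pair
print(pick_link(pc, nas, 2))

# A second client is what can land on the other member link:
pc2 = "aa:bb:cc:dd:ee:02"
print(pick_link(pc2, nas, 2))
```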
 
It should do both if you use it with SMB Multichannel. If you have a PC with multiple NICs/IP addresses connected to a switch, and then from the switch use LACP into a server, you should get proper load balancing. I was hoping to use the router as opposed to a managed switch.
 
I don't think any of the Asus routers support more than 2 ports. TP-Link supports 4 groups of up to 4 ports each. Honestly, it's 10x the switch the basic one in the Asus is anyway, and only $25 to $30 for an 8-port one.

If you just want to use SMB Multichannel you do not need LAG; you just plug the multiple NICs into the same switch. SMB Multichannel is an OS-level NIC teaming technology and works with any switch (as far as I know). But it still isn't a perfect combination of two 1G links into 2G. It also load balances by session.

From what I recall, SMB Multichannel does carve a single file transfer into multiple sessions to help load balance, but it isn't per-packet (per-packet load balancing is very intensive on any network device, as it totally bypasses most hardware acceleration). It should come close to saturating the 2G though, as long as the NICs on either end support RDMA.
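Rough back-of-envelope for what "nearly saturating the 2G" works out to; the efficiency factor is a rule-of-thumb allowance for protocol overhead, an assumption rather than a measurement.

```python
# Back-of-envelope throughput for two 1G NICs with SMB Multichannel.
# The efficiency factor is a rough TCP/IP + SMB framing allowance
# (an assumption, not a benchmark).

link_gbps = 1.0
links = 2

line_rate_MBps = link_gbps * 1000 / 8 * links   # 250 MB/s raw line rate
efficiency = 0.94
print(f"~{line_rate_MBps * efficiency:.0f} MB/s usable")  # ~235 MB/s
```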

If all you want is to use SMB Multichannel between two supported devices, you should not need LACP or a LAG, just dual NICs that support it on both ends.

If you want SMB Multichannel to communicate with a LAG, that won't work. The best you can hope for in that case is that two simultaneous transfers will use the two different links, but I suspect that if SMB doesn't see the other end as having support, it won't even do that much.
 
So SMB Multichannel only cares about the number of NIC connections you have client side. Server link speed doesn't play a factor in the channel count, because multiple connections can occur over one link into the server; where link speed does play into it is overall bandwidth, so you don't saturate one link. The max Windows supports is 4 channels, I believe. However, you can't use mismatched NIC speeds, because it chooses the fastest connection for Multichannel.
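A little sketch of that selection behaviour as described above: group the client NICs by link speed, keep only the fastest tier, cap at 4 channels. This models the description in this thread, not Microsoft's actual implementation.

```python
# Sketch of SMB Multichannel NIC selection as described above:
# only the fastest speed tier is used, up to a channel cap.
# A mental model, not Microsoft's code.

from typing import List, Tuple

MAX_CHANNELS = 4  # the commonly cited Windows limit

def pick_channels(nics: List[Tuple[str, float]]) -> List[str]:
    """nics = [(name, speed_gbps), ...] on the client side."""
    fastest = max(speed for _, speed in nics)
    # mismatched slower NICs are ignored, per the note above
    eligible = [name for name, speed in nics if speed == fastest]
    return eligible[:MAX_CHANNELS]

print(pick_channels([("10g0", 10.0), ("1g0", 1.0), ("1g1", 1.0)]))
# -> ['10g0']  (the 1G NICs sit idle)
print(pick_channels([("1g0", 1.0), ("1g1", 1.0)]))
# -> ['1g0', '1g1']
```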

I’m trying to change my network topology to increase bandwidth to my server. The example I gave was unrelated to my actual network topology, but what you said makes sense.

I'll probably just go the route of getting a 10GbE NIC for the PC and the server and linking them directly. The server has 6x 1GbE ports; I just figured I might be able to utilize them better for the rest of my network. Currently I use LACP from the server to the router's first 2 ports, but adding more doesn't math out because of bottlenecks in the router itself. Even if I linked 4 of the 1GbE ports to the router, the devices that live on the mesh and on my unmanaged 2.5GbE switch would still only get at most 2.5GbE, because the mesh router on the other end can only receive at most 2GbE. And the other devices on the router's 2.5GbE port can't get more than 2.5GbE.

So yeah. Bottlenecks.
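Put as arithmetic, the end-to-end rate is just the minimum hop on the path, which is why widening the LAG doesn't help here:

```python
# End-to-end throughput is capped by the slowest hop on the path,
# so extra LAG members past that hop buy nothing.

def path_rate(*hops_gbps: float) -> float:
    return min(hops_gbps)

# server --4x1G LAG--> router --2.5G port--> switch --2G--> mesh node
print(path_rate(4.0, 2.5, 2.0))  # 2.0 Gbps, no matter how wide the LAG
```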

If somewhere down the line 10GbE switches become cheaper, I'll get one of those and add 10GbE NICs to the supported devices.

2.5GbE has worked well so far, but there's always room for improvement.
 
For the performance and throughput you're looking for (is it actually needed?), you need to take a 2-tier approach. Buy appropriate switches for your wired devices, your "core" if you will; the switches in the Asus units are then essentially unused. The two Asus units (the second tier, or your "edge") plug into your switches to serve only wireless clients and your internet connectivity. Your wireless and internet will be limited to 2 or 2.5 gig, but that should be plenty for those; your wired LAN can scale as much as you are willing to pay for.

You can get a switch (or switches) that can LAG 8x 1G connections or 4x 2.5G connections, but you then need to use SMB Multichannel or another NIC teaming technology to create enough flows to take advantage of it. To truly load balance well, both the client and server (or both clients) need to support RDMA and have SMB set up. That technology doesn't really care about the physical connection so much as having multiple IPs available to create multiple flows (which the LAG can then load balance across its member links). But with SMB you don't need a LAG to the server or client; you can just leave it as a bunch of totally independent connections with multiple IPs. In fact, a LAG would actually defeat some of the benefit of SMB.

In reality, it is always better to have one big pipe than multiple smaller ones attempting to load balance. 10G NICs in the two machines in question are probably your most economical choice at this point (though still pretty pricey for decent ones that can actually push 10G); you just have to do a bit of extra work to make sure traffic between the machines goes over that 10G direct connection and not via the slower connection to the Asus.

Easy enough to do, but there can be some quirks. As long as you give the 10G NICs static IPs in a different subnet, assign a unique hostname to each of those IPs (via the hosts file, or a script to add them to the Asus DNS), and make sure you use that hostname (or IP) for traffic between those two machines, it should work for most things, at the very least the stuff that needs that much bandwidth.

Some processes will probably still use the slower link to talk between the machines, but that's just background stuff that doesn't need much bandwidth. In my experience you can never get everything to use the dedicated link; there's always some weird thing that wants to go the other way, but like I said, it isn't anything that needs much bandwidth.
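One quick way to sanity-check that traffic really takes the direct 10G link: "connect" a UDP socket to the server's 10G address and see which local IP the OS routing picked. The 10.10.10.x addresses below are hypothetical placeholders for whatever subnet you give the 10G NICs.

```python
# Check which local interface the OS routes through to reach the
# server's 10G IP. A UDP connect() only selects a route; no packets
# are actually sent. The addresses are made-up examples.

import socket

SERVER_10G_IP = "10.10.10.2"  # hypothetical static IP on the server's 10G NIC

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((SERVER_10G_IP, 9))
print("local side:", s.getsockname()[0])  # expect the PC's 10G IP, e.g. 10.10.10.1
s.close()
```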

But again I have to ask: is >2G, or even >1G, something you do often enough that the reduced transfer time would justify the cost?
 
Thanks for the info. I keep Steam games on the server, VMs with Proxmox, and I run a private Plex server for family and friends, often moving 1080p/4K files in bulk using choeasycopy, or a standard file transfer if it's not a lot. The biggest movement of files is between my main PC, the server, and my backup NAS. I have 2GbE to my office computer over mesh, and about 25 other devices in the house, either WiFi or wired, but they use minimal bandwidth. I have 3 other laptop PCs as well, but they're rarely in heavy use. 2.5GbE works well, don't get me wrong: a single 1GbE connection can theoretically move 125MB/s (usually less on a single SMB client), so 2.5GbE can move upwards of 312MB/s theoretically, I think. That's assuming your SSD RAID read/write speeds and/or cache can keep up and all things are ideal.

So yeah, a 10GbE switch is overkill, but point-to-point makes sense. A 10GbE switch is like looking at an expensive car: it's unnecessary, but damn it go brrrr. Since I don't have a massive number of connections, when I describe a network switch I'm thinking more of a smaller one, maybe consisting of SFP+ to RJ45, or one with 2-4 10GbE RJ45 ports. I'd love to tinker with running optical, but it's way outside my needs.
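For reference, the conversion behind those numbers: divide the line rate by 8, then shave a little for protocol overhead (the 95% figure is a rough assumption, not a benchmark).

```python
# Quick GbE -> MB/s rule of thumb used in the numbers above.
def gbe_to_MBps(gbps: float, efficiency: float = 0.95) -> float:
    return gbps * 1000 / 8 * efficiency  # 95% is a rough overhead guess

for g in (1.0, 2.5, 10.0):
    print(f"{g:>4} GbE ≈ {gbe_to_MBps(g):.0f} MB/s")
# ~119, ~297, and ~1188 MB/s respectively
```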
 
There's always InfiniBand too 🙂
 
N.B. It's 1GbE not 1000gbe ;) (unless I missed a massive leap in technology!).

It's being piloted. Probably not in the OP's house, though.

At this point it's all just a matter of how many parallel wavelengths you can mux together. They pretty much can't make the light go blinky blinky any faster, or make the encoding any more complex. Though it is now possible to do 100G on a single lane (in a lab), so there's still progress being made there too, I guess; it's just a lot longer between breakthroughs in that area, since they keep running up against pesky laws of physics.
 