RT-AX86U - Jumbo frames for 2.5GbE port.

Spc

Occasional Visitor
Hey everyone,
I am testing out new 2.5Gbit/s links.

I am using an RT-AX86U with the latest Merlin firmware and jumbo frames are enabled, but the firmware does not specify how big the jumbo frames are. Are they 9k or 16k?
The USB dongle from ASUS I am using is the ASUS USB-C2500, which supports 16k jumbo frames.



The ASUS USB-C2500 also supports 16k jumbo frames in Linux.


ASUS USB-C2500:
Chip: Realtek RTL8156B

The RT-AX86U uses:
Ethernet chip: Realtek RTL8226B

If I check with ifconfig, all interfaces show an MTU of 1500, no sign of 9000 bytes:
Code:
bond0     Link encap:Ethernet  HWaddr FC:34:97:8B:5E:C8
          UP BROADCAST RUNNING ALLMULTI MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:10175872 errors:0 dropped:2729 overruns:0 frame:0
          TX packets:2754910 errors:0 dropped:461 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:13013747984 (12.1 GiB)  TX bytes:497796580 (474.7 MiB)

br0       Link encap:Ethernet  HWaddr FC:34:97:8B:5E:C8
          inet addr:192.168.0.35  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:647988 errors:0 dropped:32530 overruns:0 frame:0
          TX packets:4290 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:50536958 (48.1 MiB)  TX bytes:1018450 (994.5 KiB)

eth0      Link encap:Ethernet  HWaddr FC:34:97:8B:5E:C8
          UP BROADCAST ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr FC:34:97:8B:5E:C8
          UP BROADCAST ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth2      Link encap:Ethernet  HWaddr FC:34:97:8B:5E:C8
          UP BROADCAST ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth3      Link encap:Ethernet  HWaddr FC:34:97:8B:5E:C8
          UP BROADCAST RUNNING ALLMULTI SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:6498690 errors:0 dropped:24 overruns:0 frame:0
          TX packets:2618277 errors:0 dropped:457 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8161966918 (7.6 GiB)  TX bytes:473784848 (451.8 MiB)

eth4      Link encap:Ethernet  HWaddr FC:34:97:8B:5E:C8
          UP BROADCAST RUNNING ALLMULTI SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:3677182 errors:0 dropped:58 overruns:0 frame:0
          TX packets:136633 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4851781066 (4.5 GiB)  TX bytes:24011732 (22.8 MiB)

eth5      Link encap:Ethernet  HWaddr FC:34:97:8B:5E:C8
          UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:2046568 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7991081 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:190780625 (181.9 MiB)  TX bytes:10145018265 (9.4 GiB)

eth6      Link encap:Ethernet  HWaddr FC:34:97:8B:5E:C8
          UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:6666 errors:0 dropped:0 overruns:0 frame:332272
          TX packets:584446 errors:48 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1369515 (1.3 MiB)  TX bytes:64432145 (61.4 MiB)
          Interrupt:48

eth7      Link encap:Ethernet  HWaddr FC:34:97:8B:5E:CC
          UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:193668 errors:0 dropped:21 overruns:0 frame:0
          TX packets:916936 errors:0 dropped:17574 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:36115988 (34.4 MiB)  TX bytes:443933244 (423.3 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING MULTICAST  MTU:65536  Metric:1
          RX packets:122205 errors:0 dropped:0 overruns:0 frame:0
          TX packets:122205 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:27457103 (26.1 MiB)  TX bytes:27457103 (26.1 MiB)

lo:0      Link encap:Local Loopback
          inet addr:127.0.1.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING MULTICAST  MTU:65536  Metric:1

spu_ds_dummy Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          UP RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

spu_us_dummy Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          UP RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
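
Over SSH this can be checked, and in principle overridden, with something like the following (a minimal sketch; eth0 is just a placeholder, substitute whichever ethN above is the 2.5GbE port on your unit, and whether the driver accepts a jumbo MTU at all, and how large, depends on the firmware):
Code:
# Show the current MTU of the port.
ifconfig eth0 | grep MTU
# Try raising it; the driver will simply refuse values it can't support.
ifconfig eth0 mtu 9000
# If 9000 is accepted, larger values (e.g. 16000) can be probed the same way.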


I have enabled jumbo frames in the UI.


Does anyone know if Merlin caps jumbo frames at a maximum of 9k, or can we get more than 9k?


Thanks.
 

bbunge

Part of the Furniture
You may want to reconsider using jumbo frames. While they work well on your LAN, packets going to the internet will be broken up to satisfy the MTU of your provider's hardware. With jumbo frames enabled, that work is done by the router. Even with standard Ethernet packets the frames can still be broken up.
Do you want to add extra load to your router?
 

Spc

Occasional Visitor
bbunge said:
You may want to reconsider using jumbo frames. While they work well on your LAN, packets going to the internet will be broken up to satisfy the MTU of your provider's hardware. With jumbo frames enabled, that work is done by the router. Even with standard Ethernet packets the frames can still be broken up.
Do you want to add extra load to your router?
My router is Windows Server 2022 on an Intel Core i5-6400 with 20 GB RAM and an Intel Ethernet Server Adapter I350-T4 v2.
I have a 2 Gbit/s LACP link to the internet and a 2 Gbit/s LACP link to the local switch on this router.

I am using the RT-AX86U only as an access point / switch.
I also tested jumbo frames to the internet: because I have a direct connection to dual-channel fiber (FTTH), I can use up to a 9k MTU and it goes through without fragmentation.
I can ping with a 9k MTU without any problems and it doesn't fragment.
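
For reference, the ping test looks roughly like this from PowerShell on the Windows Server box (a sketch; 203.0.113.1 is just an example target, and 8972 is the largest ICMP payload that fits in a 9000-byte packet once the 20-byte IP and 8-byte ICMP headers are added):
Code:
# -f sets the Don't Fragment bit, -l sets the ICMP payload size in bytes.
ping -f -l 8972 203.0.113.1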
 

heysoundude

Part of the Furniture
It's my (possibly mistaken) understanding that for pure end-to-end IPv6 connections, each end of the link will negotiate the frame size... so unless you're only communicating with IPv6 servers, jumbo frames will be a burden on your router and will bottleneck your network to an extent, as @bbunge suggested.

Maybe this will help: https://www.smallnetbuilder.com/arc...o-know-jumbo-frames-in-small-networks?start=0
Think about how much more capable your router is today vs. 15 years ago when that article was written.
 

Spc

Occasional Visitor
Yeah, I think IPv6 ends up with a static MTU; IPv4 on the other hand can be changed.

Example (Get-NetIPInterface output):
Code:
ifIndex InterfaceAlias AddressFamily NlMtu(Bytes) InterfaceMetric Dhcp    ConnectionState PolicyStore
20      LAN            IPv4          16114        20              Enabled Connected       ActiveStore
20      LAN            IPv6          1452         20              Enabled Connected       ActiveStore

Only IPv4 is affected by the driver MTU change.
IPv6 always stays at 1452, as shown.

You can test this yourself (a PowerShell sketch of these steps follows below):

1. Set the MTU to 9000 in the Windows driver.
2. Disable your NIC in Windows.
3. Enable your NIC in Windows.
4. As soon as you enable the NIC, run Get-NetIPInterface in Windows PowerShell.

IPv6 and IPv4 will both show a 9000 MTU.
After a minute or so, Windows will adjust the IPv6 MTU back to 1452 automatically.
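
A rough PowerShell sketch of those steps, run as Administrator (the interface alias "LAN" is taken from the output above, and the "Jumbo Packet" property name and "9014 Bytes" value vary between drivers, so check yours with Get-NetAdapterAdvancedProperty first):
Code:
# 1. Set the jumbo frame size in the driver (property name and value are driver-dependent).
Set-NetAdapterAdvancedProperty -Name "LAN" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
# 2. and 3. Bounce the NIC.
Disable-NetAdapter -Name "LAN" -Confirm:$false
Enable-NetAdapter -Name "LAN"
# 4. Check the MTU per address family right away, then again after a minute.
Get-NetIPInterface -InterfaceAlias "LAN" | Select-Object ifIndex, InterfaceAlias, AddressFamily, NlMtu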


All my switches support a 10240-byte MTU, and 10240 is enabled on all switch ports in my local network.
 

Morris

Very Senior Member
In IPv4, the endpoints should negotiate the MTU. If your local LAN works best with an MTU of, say, 9k, then use that. Communications with hosts on the internet using the default MTU will negotiate with your hosts, and they will choose the smaller MTU. I don't recall the exact mechanism for when hosts at both ends support jumbo frames and ones in the middle don't, yet both hosts are still able to communicate (Path MTU Discovery and the TCP MSS handle that case).
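
To see that in action from a Linux host, a quick sketch (tracepath is part of iputils, and 1.1.1.1 is just an example destination):
Code:
# tracepath reports the path MTU (pmtu) that Path MTU Discovery settles on toward the target.
tracepath -n 1.1.1.1
# Or probe a specific size with the Don't Fragment bit set
# (8972 bytes of payload + 28 bytes of headers = a 9000-byte packet):
ping -M do -s 8972 -c 1 1.1.1.1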

Morris
 

Morris

Very Senior Member
Spc said:
I found this on the internet; the graphs are from the article below:

https://www.jeffgeerling.com/blog/2020/testing-25-gbps-ethernet-on-raspberry-pi-cm4

So jumbo frames should get us an improvement in speed.
I am talking about a direct LAN connection between computers, not the WAN port.

First off, the graphs appear to be for a Raspberry Pi. If one or more of your hosts is a Pi and you are using the same adapter he is, then you have valid info. That is, if your application behaves as his test one did.

Jumbo frames were critical in the early days of high-speed networking. Today, with controller stack offload and/or very fast CPUs, that's not necessarily the case. As most modern CPUs and NICs are both capable of keeping the TCP window full even with small packets, the difference in performance is limited to the delay of the inter-frame gaps, and those are tiny. Now, if you are using jumbo frames and for any reason a retransmission is needed, that's going to take a lot longer than ten inter-frame gaps, and that's just for the one frame, never mind an entire window, or on average half a window. If line quality is not perfect, and/or one of the hosts experiences buffer overruns or underruns, then jumbo frames will slow throughput significantly. Only large file transfers and workloads such as iSCSI will benefit, and as I pointed out at the beginning, not as much as they used to.

Morris
 

ColinTaylor

Part of the Furniture
Spc said:
So jumbo frames should get us an improvement in speed.
I am talking about a direct LAN connection between computers, not the WAN port.
Bear in mind that most of his efforts were in overcoming the problems with his setup and the CPU limitations of the Pi. Once he did that he ended up with roughly the numbers you would expect for non-jumbo vs. jumbo frames, i.e. 2330 Mbps vs. 2480 Mbps. So about a 6% increase in throughput. Measurable but not exactly game changing.
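
That lines up with a back-of-the-envelope framing calculation (assuming plain TCP/IPv4 with no TCP options, and 38 bytes of Ethernet overhead per frame for preamble, MAC header, FCS and inter-frame gap):
Code:
1500-byte MTU: 1460 / (1500 + 38) ≈ 94.9% of line rate left for TCP payload
9000-byte MTU: 8960 / (9000 + 38) ≈ 99.1%
So framing alone buys roughly 4-5%; anything beyond that presumably comes from lower per-packet CPU overhead.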
 

Morris

Very Senior Member
ColinTaylor said:
Bear in mind that most of his efforts were in overcoming the problems with his setup and the CPU limitations of the Pi. Once he did that he ended up with roughly the numbers you would expect for non-jumbo vs. jumbo frames, i.e. 2330 Mbps vs. 2480 Mbps. So about a 6% increase in throughput. Measurable but not exactly game changing.
I've seen even less with data center gear. It's not cut and dried.
 

ColinTaylor

Part of the Furniture
Morris said:
I've seen even less with data center gear. It's not cut and dried.
Same here. In real life there are usually other factors that affect throughput, as Jeff found with his equipment.
 

Spc

Occasional Visitor
It depends on the device and CPU; low-speed CPUs should see a bigger increase from jumbo frames.
Also, if you are rendering video with the CPU at 100% and you're copying files while rendering, jumbo frames should help out.
 
