10Gbe to 1Gb VERY slow transfer rate

MaxLTV

New Around Here
I'm pulling my hair out with this one. Hope someone can point me in the right direction. Luckily, I isolated the issue really well (I had to bring the equipment physically into the same room and connect it to the same router).

Components in question: two different Asus 10Gbe NICs (same model), two different Netgear GS110EMX switches, and three Win 10 PCs. The problem is the same regardless of which specific hardware is used, so it's probably not faulty devices (swapping switches and NICs changes nothing). All firmware and drivers are updated to the latest, cables are certified, etc.

In a nutshell, when a 10Gbe (copper) NIC is connected to a 10Gbe port on a switch and sends data along any path that includes something that's not 10Gbe or some form of multigig, the speed drops to 100-500Mbit (fluctuating up and down between those two bounds). Receiving data from 1Gb devices in the same setup is fine (full 1Gb).

In more detail, the simplest chain:
10Gbe NIC -> 10Gbe port on a switch -> 10Gbe or 1Gb port (does not matter) on the same switch -> 1Gb NIC results in a transfer speed floating between 100 and 500Mbit. The speed in the reverse direction (from 1Gb to 10Gbe) with the same components is normal, maxing out the 1Gb NIC.

Similarly, 10Gbe NIC -> 10Gbe switch -> 1Gb switch -> 10Gbe switch -> 10Gbe NIC results in slowness in both directions (well below 1Gb speeds). Removing the 1Gb switch in the middle makes it all work at the full 10Gbe transfer speed. Forcing the 10Gbe components to 1Gb mode results in normal 1Gb speed even with the same 1Gb switch in the middle.

When the same 10Gbe NICs are forced to a 1Gb connection speed or connected to 1Gb ports, the speed is a stable, maxed-out 1Gb (so no problem there).

10Gbe components connected to each other (10Gbe NIC -> 10Gbe switch -> 10Gbe NIC) run at a stable, full-speed 10Gbe. So it's not faulty cables or components.

So the problem is almost certainly the 10Gbe -> 1Gb transfer.

I think the 10Gbe NIC may be sending packets, or something else of that sort, that trips up the 1Gb equipment. I disabled jumbo packets, just in case. It seemed to make things a little faster, but it's hard to say.
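For anyone who wants to check the same thing without digging through the driver GUI, the jumbo frame setting can be read and changed from PowerShell. This is just a sketch - the adapter name is a placeholder, and the exact "Jumbo Packet" DisplayName and value strings depend on the driver:

  # Show the jumbo frame setting for the 10Gbe adapter (adapter name is a placeholder)
  Get-NetAdapterAdvancedProperty -Name "Ethernet 10G" -DisplayName "Jumbo Packet"

  # Set it back to standard frames (the valid -DisplayValue strings vary by driver)
  Set-NetAdapterAdvancedProperty -Name "Ethernet 10G" -DisplayName "Jumbo Packet" -DisplayValue "Disabled"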

Any ideas? Please help....
 
The plot thickens - iPerf-reported speed is normal in all cases. The slow speed affects file transfers, though.
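(For reference, the iPerf comparison was the usual client/server pattern, roughly something like the commands below, with the server IP as a placeholder. The -R flag just reverses the direction so both paths get tested from one client.)

  # On the receiving machine: start an iperf3 server
  iperf3 -s

  # On the sending machine: TCP test toward the server, then the reverse direction
  iperf3 -c 192.168.1.50
  iperf3 -c 192.168.1.50 -R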
 
Could one of your machines be the limiting factor, not the network? Obviously iPerf is testing too, but is there something different, maybe hard drive access?
 
Does not look like it. When I plug two machines into 10Gbe sockets on a switch, file transfers run at 650MB/s both ways, so the machines are capable. When one is plugged into a 1Gb socket, transfers are 120MB/s from the machine on the slow socket (normal) but 20-60MB/s from the one on the fast socket, fluctuating up and down gradually. When both are in slow sockets, I get the expected 120MB/s in both directions.
 
So iPerf worked at the expected speeds? And is that all UDP traffic? If only TCP traffic flows are causing issues... is there an MTU and/or TCP window size problem haunting you?
 
TCP in iPerf is normal too - 6.5Gbit between two 10Gbe clients and 960Mbit in any other combination, including the one that has the file transfer speed issue. So iPerf is always normal, but file throughput is not. I tried manually setting the TCP window, just in case, but I'm not sure I did it right because it changed nothing. The symptom looks a bit like a duplex mismatch, but I don't think a duplex mismatch is possible in this setup.
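In case it helps, the Windows-side knobs I mean are the global TCP settings; they can be inspected, and the receive-window auto-tuning level changed, with netsh. Just a sketch of the commands, not a recommendation for specific values:

  # Show the current global TCP settings, including receive window auto-tuning
  netsh interface tcp show global

  # Auto-tuning levels include disabled, restricted and normal (normal is the Windows default)
  netsh interface tcp set global autotuninglevel=normal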
 
This sounds like a switch issue, then. You have proven that both sides of the transaction can handle 10Gbps speeds. You have proven that when the clients are on the same side of the switching fabric, 10Gbps or 1Gbps, they behave. It is only when crossing from 10Gbps to 1Gbps on the switch itself that you see the issue.

Do you have another switch to test with?
 
I fixed it by accident. I disabled Receive Side Scaling in the NIC's driver settings, and now everything works fine, with all the other advanced features enabled and speeds in line with what's expected. I don't understand how or why it worked, but the fix held on two different PCs and with both the generic Aquantia and the Asus-branded drivers. Stumbling upon it involved a bunch of other steps: I tried another switch and was able to almost fix it by enabling Global Flow Control mode on that switch (speed got to 95% of expected), but then disabling RSS fixed everything even without Flow Control (which is not a good thing to have set globally anyway). Another thing that seems buggy is the Receive Side Coalescing threshold - the default "adaptive" setting really slows things down on 10Gb and 5Gb connections, but setting it to Off, Low, or Moderate returns everything to normal.
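For anyone who runs into the same thing, RSS and RSC can also be toggled from PowerShell instead of the driver's Advanced tab. A rough sketch - the adapter name is a placeholder, and the coalescing threshold shows up as a driver-specific advanced property whose DisplayName varies:

  # Turn off Receive Side Scaling on the 10Gbe adapter (adapter name is a placeholder)
  Disable-NetAdapterRss -Name "Ethernet 10G"

  # Receive Side Coalescing has its own built-in cmdlet as well
  Disable-NetAdapterRsc -Name "Ethernet 10G"

  # To find the driver-specific knobs (like the coalescing threshold), list the advanced properties
  Get-NetAdapterAdvancedProperty -Name "Ethernet 10G"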
 
