10Gb network (computer to computer), very slow (~4Gb iperf3, 50MB/s SMB/CIFS)


Roveer

So I tried to move into the 10Gb world by putting two Mellanox MNPA19-XTR cards in my server and backup storage computer. They connect, but the speeds are very slow.

iperf does not seem to work (connection refused), but iperf3 does and gives 4-4.5Gbps, yet SMB/CIFS transfers are only 50MB/s (which is slower than my gigabit connection).
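(Side note for anyone replicating this: a single TCP stream often can't fill a 10Gb pipe on older CPUs, so a parallel run is worth comparing against. A minimal sketch, assuming the backup box owns 10.10.10.10 on its 10Gb card; -P 4 and -t 30 are just example values:)

Code:
rem on the backup/storage box:
iperf3 -s

rem on the server, four parallel streams for 30 seconds:
iperf3 -c 10.10.10.10 -P 4 -t 30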

I've set all kinds of tuning parameters and nothing improves speed.

If you have a 10Gb network, can you run an iperf test on the card itself and report the results? I'm not sure my testing methodology is valid, but here's what I'm doing.

Strangely, if I run the same iperf3 test on my gigabit NIC, I get basically the same results.

If I run an iperf3 server on my local machine bound to my gigabit NIC: iperf3 -B 172.16.1.117 -s
then run an iperf3 client on that same machine pointing at the same NIC: iperf3 -B 172.16.1.117 -c 172.16.1.117
I get the following results (that's a 1Gb network card, yet it's showing 6Gb in what should be a loopback test):

[screenshot: iperf3 results on the gigabit NIC]


If I run the same test on my 10Gb card (iperf3 -B 10.10.10.10 -s and iperf3 -B 10.10.10.10 -c 10.10.10.10), I get basically the same results (actually slower).

[screenshot: iperf3 results on the 10Gb NIC]


So I'm not even sure what this is measuring. My local PC's motherboard bandwidth?

I've had these 10Gb cards in a few different PCs, including my Xeon server, and I have not seen a speed better than 6+Gb. More troubling is that SMB transfers sit in the 50MB/s range, which is far worse than my gigabit NICs, which yield 100+ MB/s. I'm not sure what's going on, but I'm determined to get to the bottom of it. My 10Gb experiment has turned into a 1/2Gb result.

Roveer
 
My bad, yes: SMB/CIFS.

I ran up a Windows 10 i7-3770 with the latest Mellanox driver against an older machine and was getting 500+ MB/s transfers, but it tails off to 70MB/s towards the end. Not sure if I'm filling buffers on the old machine. I'm going to re-install into my Xeon, upgrade the backup storage machine to Win10, and try again. Wondering if this might be a problem with older hardware? I was trying to use iperf to eliminate disk bottlenecks, but the 5Gb iperf results seemed so low compared to what I had seen in videos. Most of them were iperf'ing at 9Gb.
 
I'm coming to understand that I might have insufficient motherboard hardware. One of the motherboards says its slot is PCIe x8 but does not say 2.0.

I'm going to try to cobble together two machines that have PCIe 2.0 x8 slots, install Win10 on both, and try again.
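(On Windows 8 / Server 2012 and later, the NetAdapter PowerShell module can report what the slot actually negotiated, which saves guessing from motherboard manuals. A minimal sketch; I believe the relevant output columns are the PCIe link speed and width, though exactly what is reported may vary by driver:)

Code:
# list each NIC's hardware/slot info; check the PCIe link
# speed and width reported for the Mellanox adapter
Get-NetAdapterHardwareInfo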
 
You keep flipping between bps and Bps... please pick one and stick with it; otherwise it makes following your speed issues very challenging.

10Gbps = 10 Gigabits per second
10GBps = 10 Gigabytes per second
500MB = 500 Megabytes
500Mb = 500 Megabits
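
Applying those units to the numbers in this thread (rough conversions, ignoring protocol overhead):

Code:
10 Gbps  / 8 = 1.25 GB/s   theoretical 10GbE line rate
4.5 Gbps / 8 ≈  560 MB/s   the iperf3 result reported above
50 MB/s  * 8 =  400 Mbps   the SMB transfers - below gigabit
1 Gbps   / 8 =  125 MB/s   why gigabit SMB tops out around 110 MB/s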

As for iperf on loopback... I'm not exactly sure how that will really work. I have never tried iperf in that manner.

I just ran a similar test on my Linux box and it is reporting over 20Gbps. This is a much older Core2 E8500 with an Intel Pro/1000 NIC.

[screenshot: iperf loopback results, over 20Gbps]
 
I ran the same iperf test on my pfSense box, which only has 1Gb NICs, and it came back at 19Gb/s, so I think those are bus speeds we are seeing. My pfSense box is on an i7-3770 CPU.

For now I'm going to go with old hardware being my problem. I have an i5 that has a PCIe 3.0 x16 slot and an i7 that has a 2.0 x8 or x16 (can't remember). I'm going to run them up on Win10 and try the test again. Let's see how this performs on Win10 on halfway decent hardware. Thank you for your post.
 
Old hardware? Did you not see what hardware I am running? My box is 8+ years old.

My tests were done on a Dell Optiplex 760 with a Core 2 Duo E8500. And I lied... it isn't a true Intel PRO/1000... it is actually the on-board Intel NIC.

 
If this helps... Intel Braswell (Pentium N3700) with a Realtek gigabit NIC...

It really doesn't test the NIC, just how fast the chip/chipset can move things across the memory bus.

Code:
$ iperf3 -B 127.0.0.1 -c 127.0.0.1
Connecting to host 127.0.0.1, port 5201
[  4] local 192.168.1.20 port 60823 connected to 192.168.1.20 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.20 GBytes  10.3 Gbits/sec    0   2.12 MBytes       
[  4]   1.00-2.00   sec  1.29 GBytes  11.1 Gbits/sec    0   2.56 MBytes       
[  4]   2.00-3.00   sec  1.26 GBytes  10.8 Gbits/sec    0   3.43 MBytes       
[  4]   3.00-4.00   sec  1.21 GBytes  10.4 Gbits/sec    0   4.62 MBytes       
[  4]   4.00-5.00   sec  1.22 GBytes  10.5 Gbits/sec    0   5.12 MBytes       
[  4]   5.00-6.00   sec  1.18 GBytes  10.1 Gbits/sec    0   6.18 MBytes       
[  4]   6.00-7.00   sec  1.20 GBytes  10.3 Gbits/sec    0   6.49 MBytes       
[  4]   7.00-8.00   sec  1.19 GBytes  10.2 Gbits/sec    0   9.80 MBytes       
[  4]   8.00-9.00   sec  1.16 GBytes  9.93 Gbits/sec    0   9.80 MBytes       
[  4]   9.00-10.00  sec  1.19 GBytes  10.2 Gbits/sec    0   9.80 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  12.1 GBytes  10.4 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  12.1 GBytes  10.4 Gbits/sec                  receiver
 
I get it. Jumping back to my file transfer test: when I had the 10Gb cards connected, they were in a machine that only had a SATA II interface, which maxes out at 300MB/s. Like I said, I'm going to go after some better hardware for my testing. With such old hardware there are just too many variables to weed out. The motherboard in one of those systems was purchased new in 2007: 10 years old, a CPU E6700 (39% slower than yours), no PCIe 2.0, and SATA II. Also only 4GB of memory. Like I said, too many variables. Let's see what happens on newer hardware on Win10.

Roveer
 
Haha... didn't expect you to have older hardware than I did. I was basing my guess on your i7 comments only. If it matters... I just went and tested on my pfSense box, which is an E4600, and the results were pitiful; however, it was iperf2, since that is all I found for pfSense.
 
For 10Gb/s there are two things to take into account: 1) your RAM bandwidth (10Gb/s traffic can eat up to around a quarter of it), and 2) the PCIe slot itself (how many lanes).

CPU buses are fast enough that they easily exceed 10Gb/s. It's only when you're trying to build a 100Gb/s router that a lot more things start to matter.
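
Rough slot arithmetic (usable per-lane rate after 8b/10b encoding on PCIe 1.x and 2.0; real-world throughput lands somewhat lower):

Code:
PCIe 1.x x8: 8 lanes * 250 MB/s = 2 GB/s = 16 Gbps
PCIe 2.0 x4: 4 lanes * 500 MB/s = 2 GB/s = 16 Gbps
PCIe 2.0 x8: 8 lanes * 500 MB/s = 4 GB/s = 32 Gbps

So even a 1.x x8 slot clears 10Gb on paper, but the same card dropped to x4 at 1.x speeds (8 Gbps) would not.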
 
A couple of things to check, as this is a process I have been going through
(desktop-to-NAS connection):

1. Disable or remove antivirus and firewall. I use Bitdefender and it was blocking the iperf activity to my NAS.
2. Connect both machines together with a patch cable and remove routers/switches etc. from the mix while you do baseline testing.

3. Check which version of SMB you are using; 2.0 or 3.0 are best (PowerShell checks for this and the NIC settings below are sketched at the end of this post).

4. Run an iperf loopback test to see what your system is capable of doing at each end,
e.g. open two CMD windows and in one run iperf -s, in the other iperf -c localhost. This should check RAM and system performance.

PCIe 2.0 x4 is sufficient for full bandwidth of 10GBase-T.

On the Windows side, some settings that were messing up my line speed:
1. Enable jumbo frames.
2. Turn Interrupt Moderation off Adaptive and try 'Off' or 'Low' (this will increase CPU usage, though). This one in particular netted me an additional 300MB/s, and I don't know why specifically; Adaptive should be fine.
3. Confirm that your NIC is syncing at 10GbE speed; you may need to force it.


Hope that gets you some way forward.
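
(On Win8 / Server 2012 and later, most of the Windows-side checks above can be read straight out of PowerShell; a minimal sketch. Note that Get-SmbConnection only lists dialects for shares currently open, and the jumbo-frame property name varies by driver:)

Code:
# negotiated SMB dialect per open connection (want 3.x):
Get-SmbConnection

# confirm the NIC actually linked at 10 Gbps:
Get-NetAdapter | Select-Object Name, LinkSpeed

# jumbo frame setting, if the driver exposes it under that name:
Get-NetAdapterAdvancedProperty -DisplayName "*Jumbo*"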
 
I think the answer to all of my questions about saturating my 1Gb IPsec link was in front of me the whole time: SMB3. I had been doing some 10Gb testing, and not until I got to Win10 (Win8.1 does this too) did I get the advantages of SMB3, which among other things opens multiple connections. The dips you see in the graph are most likely the crappy PC I had on one side of the VPN trying to do Win10 updates at the same time I was uploading; I can hear the HD grinding away right now.

[screenshot: transfer graph with SMB3 holding near line rate, with dips]
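
(For anyone wanting to verify SMB3 is actually fanning a copy out over multiple TCP connections, Windows 8+ exposes this in PowerShell; a minimal sketch:)

Code:
# one row per TCP connection SMB multichannel has opened,
# run while a transfer is in progress:
Get-SmbMultichannelConnection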
 
This is a very interesting thread. I just tried a loopback iperf on my dual Xeon 5160 system and it was just over 2Gbit/sec. That seems awfully slow. Perhaps the version of iperf has something to do with it? (I usually have issues with iperf3 crashing or not working correctly.)
 
