How much of a difference do Intel NICs REALLY make?

Jesse B

Regular Contributor
I'm slowly building up my home network as money permits. With the recent addition of my new switch (HP 1410-8G, it's great), I want to make sure I get maximum performance out of my network.

I haven't run any real tests thus far (unfortunately I only have Cat5 lying around while I wait for the Cat6 I ordered to arrive), but I'm not getting anywhere near Gbit speeds. Using the Cat5 and onboard NICs, I was getting ~250Mbit transferring ~700MB ISOs. That's a lot better than what I was getting before, but I itch for more performance :D

Upgrading the network cables should help a tad, but I feel like a proper PCI-e NIC would make a world of difference. I always see Intel NICs recommended, but I'm curious what kind of performance gain is typical with them before I drop $70 on cards for my server and desktop.

Also, just for comparison:

My desktop has a Marvell 88E8056 NIC, running Ubuntu 10.10
My server has a Realtek 8111DL, running... well, nothing at this point in time, but something Linux/BSD based by tomorrow or the following day :D

I'm not sure what performance is like with either of those (especially under Linux), so knowing that will help me make my decision as well.

Thanks,


- Jesse
 
I think the main reason Intel NICs are recommended is not only performance but also compatibility. Generally they are well supported out of the box with any OS, and they tend to provide consistent performance. That said, I have had a few onboard Marvell NICs with great performance. Marvell NICs don't seem to be supported right out of the box in as many OSes, but you can download drivers from their website for lots of OSes. I currently have a couple of computers with Realtek NICs and they seem to work well in Windows. Support in Linux/Unix, though, is a bit rough from what I understand.

This thread might also help.

As for performance when transferring files it might not be your NICs that are holding you back. Have you first tested your raw network throughput using Iperf to see where you are at? I would be happy to try and help you narrow down your bottleneck. Just let me know.
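If you haven't used iperf before, a minimal run looks something like this (assuming your server's LAN address is 192.168.0.2; substitute your own):

iperf -s (run this on the server)
iperf -c 192.168.0.2 (run this on the client; it reports raw TCP throughput)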

00Roush
 
Alright, that makes sense. I'll just wait and see where my bottleneck is first.

I'll run Iperf (after I install it and figure out how to use it) and post my results.

I also just installed Debian on my server. Not that it's really important ;)

Thanks for your help,


- Jesse
 
EDIT: I lied a little bit in my last post. On my server I'm running Proxmox VE, which is based on Debian. If you're not familiar with it, it's a virtual environment, similar to something like ESXi. I saw it on an online show I watch and figured I'd try it, as I'm not really using my server for anything until I can afford to buy some hard drives. I've got an Ubuntu Server 10.10 machine set up on it right now, which is what I'm installing iperf on. I picked the network adapter that is supposedly the best performing (virtio; I assume it's like the bridged adapter in VirtualBox, not sure yet though).

EDIT 2: Removed the first part of the post; I'm an idiot. I was trying to compile iperf from source when it was in the repositories all along. *Sigh*. I'll run some tests and post the results.
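(In case it saves anyone else the trouble, on Debian/Ubuntu it's just:

sudo apt-get install iperf

and the one package provides both the server and client modes.)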
 
Alright, I ran the tests and... excellent performance! I guess this is a good sign; I just need to figure out how to get this in real-world situations ;)

https://lh4.googleusercontent.com/_.../AAAAAAAAAFQ/Qud5gHy4YAM/s1280/Screenshot.jpg

As you can see there, I got ~940Mbit. If there are other options you'd like me to run, I can do that too; I just went with the default settings.

Now, the question is, why are my transfer speeds so much slower, and how can I get them to move at/close to this speed?

Thanks,


- Jesse
 
I've tried using both SFTP and NFS for transferring files, and they both give me the same results: ~200Mbit +/- 50Mbit. Iperf seems to disagree with those results, however.
 
So the thing to remember is that iperf measures maximum network performance. FTP, SMB, and NFS all use the network but also interact with other components that can limit performance, such as hard drives.

Generally when I am looking for a bottleneck with file transfers I break things down into 3 categories that commonly limit performance: network performance, disk performance (on both the client and server), and protocol/program performance.

I test network performance using iperf. Most of the time I use a larger window size (-w 64k) and test in both directions (-r). If I am higher than 900 Mbps I would say I am good to go and move on. At 800-900 Mbps I maybe go over my network configuration to make sure everything is in order. Any lower than that and I would consider there to be a problem.
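For example, something along these lines (again assuming the server is at 192.168.0.2):

iperf -s -w 64k (on the server)
iperf -c 192.168.0.2 -w 64k -r (on the client; -r runs the test in each direction, one at a time)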

Disk performance can be tested in many ways. On Windows, HD Tune, SiSandra, and CrystalDiskMark can all provide a good indication of hard drive performance. On Unix/Linux, dd is commonly used as a quick-and-dirty method (not always accurate) to gauge local hard drive performance. Other options are Bonnie++, Iozone, and Iometer. Basically you are looking to see what your maximum sustained read/write speeds are. If your maximum sustained read/write speed is 100 MB/sec, realistically you could see close to this speed (~95 MB/sec) on file transfers if you have no other bottlenecks.
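As a rough sketch of the dd method (the path is just an example; point it at the drive you want to test, and make sure there is at least 1 GB free):

dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync (write test; conv=fdatasync forces the data to disk so the cache doesn't inflate the number)
dd if=/tmp/ddtest of=/dev/null bs=1M (read test; best run after a reboot or cache drop, otherwise the file may come straight from RAM)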

Protocol/program performance is a bit difficult as each OS can implement protocols/programs differently. I know FTP, SMB, and NFS are all capable of high performance but it depends on the version and implementation. A lot also depends on the program that is actually doing the reading/writing of the data. Some programs were written to provide high performance reading/writing data on a local disk; if they try to read/write to a disk that is over the network, performance may suffer greatly due to the higher latency.

A good example of this is the standard drag-and-drop file copy in Win XP versus Win Vista/7. XP's file copy method is mostly designed for high-performance copying between local disks. Performance over networks is okay but suffers much more than it does under Vista and 7. Vista SP1 and 7's standard file copy method has been streamlined and keeps multiple reads/writes in flight at one time to improve performance over networks. The easiest way I have found to test this category is to use what I know works. So in my case I always use Windows 7 as the client OS when testing SMB performance, as I know it is capable of very high performance. Windows 7 also works well as an FTP client using Filezilla.

So in your case your network looks great. On to testing the hard drive speed on both the client and server. If that checks out, the last step would be determining whether there is a problem with the protocol/program you are using to test.

00Roush
 
Thanks for taking the time to type all that up :D

I ran iperf again with the arguments you recommended; here are the results:
https://lh6.googleusercontent.com/_ff-oJr7H2G8/TViRcKcNi7I/AAAAAAAAAFk/fbd_3-PNzto/iperf.jpg
926Mbit, so that's still all good.

I installed iozone on both my desktop and one of my server VMs as well, and ran it on each. The naming for the tests it runs is a bit... unconventional, so unfortunately I don't know which test you're looking for. I've attached the outputs from both my desktop and server to this post. I don't expect any stellar results from them; they're older drives.

Thanks again,


- Jesse
 

Attachments

  • Iozone-Server.txt (2.8 KB)
  • Iozone-Client.txt (3.4 KB)
Iperf still looks good... the -r option should give you two results, one to the server and one back from the server. I think you need to move the -r option so it comes after the -c option. I usually use a command line like iperf -c 192.168.0.2 -w 64k -r.

Those iozone results are a bit hard to read... I probably should have given you a command line to run. I generally use a command line similar to this:
IOZONE -Rab c:\iozone\results -i 0 -i 1 -+u -f e:\test.txt -q 64k -y 64k -n 512m -g 2g -z

If you want to know what all of the options do, just do a Google search for "iozone man". This particular command line will test read/write speeds using file sizes of 512 MB, 1 GB, and 2 GB with an access size of 64 KB.
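That example is from a Windows box; since your machines are Linux the same options carry straight over, so something like this should be equivalent (the paths are just examples, put the test file on the disk you want to measure):

iozone -Rab ~/iozone-results.xls -i 0 -i 1 -+u -f /tmp/test.txt -q 64k -y 64k -n 512m -g 2g -z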

Hopefully that can give results that make a bit more sense.

00Roush
 
Well, we may have discovered the culprit here (or at least something contributing greatly to it). The temporary drive I threw in my server is pretty slow.

     KB  reclen   write  rewrite    read  reread   (speeds in KB/sec)
 524288      64   52884    94251  450041  420840
1048576      64   52530    77586  415956  418066
2097152      64   41991    77094   92649   94551

It's still about twice as fast as the writes I'm getting, however (~52,000 KB/sec works out to roughly 420 Mbit/s, versus my ~200 Mbit transfers). Does that sound like a reasonable outcome?

EDIT: My desktop (the other computer I've been using for all these tests) gets about 2-3 times these speeds on each test.
 
I would say you are probably right about the drive causing a bit of a bottleneck. Depending on how much RAM you have in your server you could expand the test to include larger file sizes to mitigate any cache effects. So if your server has 2 GB of RAM you should probably test up to 4 or 8 GB to see what your sustained speeds are. The -e option can also be used to help eliminate the effects of cache for the write tests.
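Something along these lines should do it (just a sketch; adjust the file sizes to at least double your RAM, and point -f at the disk under test):

iozone -Rab ~/iozone-results.xls -e -i 0 -i 1 -f /tmp/test.txt -q 64k -y 64k -n 4g -g 8g -z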

Something else I was thinking about: what program are you using to test your transfer speeds? Are you just doing a drag and drop, or are you using cp?

00Roush
 
00Roush said:
"I would say you are probably right about the drive causing a bit of a bottleneck. Depending on how much RAM you have in your server you could expand the test to include larger file sizes to mitigate any cache effects. So if your server has 2 GB of RAM you should probably test up to 4 or 8 GB to see what your sustained speeds are. The -e option can also be used to help eliminate the effects of cache for the write tests."

I have 4GB of RAM in there, so I could up the size quite a bit if I wanted to.

00Roush said:
"Something else I was thinking about: what program are you using to test your transfer speeds? Are you just doing a drag and drop, or are you using cp?"

I've used Filezilla, as well as tried drag-and-drop using NFS. Both yield about the same speeds. Is there a better way to do this?

I do plan on putting in new hard drives next month, so I can live with the mediocre speeds for a while. The drives I'm getting should be quite a bit faster than this ~3.5-year-old drive.

Thanks again for all the help, I've learned quite a bit!


- Jesse
 
Filezilla and drag and drop work fine. I was just thinking whatever program you were using might be holding back performance. Generally I have found Filezilla to give good performance on Windows, so I would say it shouldn't be a bottleneck. I am a little unsure of drag-and-drop performance in Ubuntu, as older versions seemed to have low Samba performance. Then again, it has been quite a while since I tried Ubuntu on the desktop. I am thinking I might try installing the latest Server and Desktop versions just to see what kind of performance I get.

Glad I can help. Let us know how it goes when you get the new drives.

00Roush
 
Alright, well I'll just stick with Filezilla and NFS for now. I haven't tried Samba yet, as my laptop is the only Windows machine I have and I generally just use Filezilla. I'm not going to do too much work until my server is fully built and completed next month.

My order came in today (a full week early, score!), so my network is now all Cat6 rather than Cat5 (for the Gbit devices at least). I also did a bit of reading, and apparently low hard drive performance is common with the virtualization setup I'm using, so that could explain why my tests had such poor results. I still need to get some drives next month anyway, as the single 500GB drive that's in there just won't cut it :D

I'll run all the benches again with the Cat6 and new drives as soon as I get them.


- Jesse
 
