jperf results: NIC faster in linux than WinXP?

Hi all.

Great site and great forum - this is my first post here. I have what I think is a driver issue in Windows XP.

I have a Gigabyte GA-G31-ES2L motherboard with on-board Realtek 8111C gigabit. I dual-boot Windows XP Pro 32-bit and Ubuntu 8.10 32-bit. My switch is the gigabit TRENDNet TEG-S80G. Check these jperf results:


That's the screen grab from the jperf server running on my target system; I'm posting it because it makes the tests easy to compare. Note the graph: there are 4 tests there, in 4 different colors, all originating from the same client machine. The white one is the Ubuntu 8.10 test, at 895 Mb/s. The other three are the same client running Windows XP instead.

The target jperf server is running Windows XP Pro 32 as well, if that matters.

Why would the Windows tests be half as fast? I grabbed and installed the latest LAN driver from Realtek's site, and achieved the same results.

Where should I start?
I bet it's just due to your TCP window sizes. I think Linux defaults to a larger window than Windows does. Try using a 64k window size on both clients.

Thanks for the lead! I'll try that when I'm home from work.

Do you mean a multiple of MSS? 8760x2 = 17520?
It doesn't really matter that much, but you'll probably need at least a 32k window to see max bandwidth. The command line I use for iperf is: iperf -c <server address> -w 64k. I don't think the window has to be a multiple of MSS; at least, it didn't seem to make much difference when I tried different sizes.
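A rough way to see why the window matters: TCP can only have one window of data in flight per round trip, so max throughput is about window size / RTT. A quick sketch (the 0.15 ms LAN round-trip time below is an assumed figure, not measured on either system):

```shell
# Back-of-the-envelope: max TCP throughput ~= window_bytes * 8 / RTT.
# The 0.15 ms LAN RTT is an assumption for illustration.
awk 'BEGIN {
  rtt = 0.00015                                        # seconds (hypothetical)
  printf "8k window:  %.0f Mb/s\n", 8192  * 8 / rtt / 1e6
  printf "64k window: %.0f Mb/s\n", 65536 * 8 / rtt / 1e6
}'
```

With an 8k window the math tops out in the ~450 Mb/s neighborhood, right where the Windows tests landed; with 64k the window is no longer the bottleneck and gigabit wire speed is.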

@00Roush: spot-on suggestion! Even a 32k TCP window saturates my gigabit network (~900 Mb/s).

There's a registry DWORD entry to set the TCP window size globally - the unimaginatively named "TcpWindowSize". I've tried 32768 (32 x 1024) and 35040 (ethernet's MSS 8760 x 4), but neither setting carries through to iperf. I wonder if iperf defaults to its own internal value? Manually passing -w 32k works perfectly, but without it iperf defaults to an 8k window.

Not that any of my disks are fast enough to deal with 900Mb/s anyway!

Thanks again for your help 00Roush!
Glad that worked. Yes you are correct about Iperf defaulting to its own setting. But as you have seen the default is much too small.

As for the registry setting... I recommend leaving it alone. To my knowledge, XP automatically tunes the window size by default, and in my experience no performance is gained by changing this setting. During local LAN testing I have found that XP usually uses window sizes of 60-64k. For internet connections it's possible that increasing the window size helps, but so far I haven't done much testing to see if it really does. If you do want to change the window size, I recommend TCPOptimizer: it calculates all of the correct values for you and lets you easily go back to the defaults if things don't go well.

On a side note... not sure what hard drives you have, but some of the newer drives can do 900 Mb/s. 900 megabits/second equals 112.5 megabytes/second: 900 / 8 (8 bits in one byte) = 112.5 MB/s. Not sure if you knew that or not.

For now, I'm leaving the registry unchanged.

I have one fast drive: the new Seagate 7200.12 (500GB single-platter). I get pretty insane performance numbers out of IOzone (run locally):
Command: iozone -Rab output.wks -i 0 -i 1 -+u -y 64k -q 64k -n 32m -g 4G -z

120 MB/s reads all the way out to 4GB file sizes. These new 7200.12 drives are amazing. I haven't tested it as a network target yet, but I plan to. It's in my desktop machine.

My server is pretty limited. It has a PCI Intel PRO/1000 GT. I struggle to get more than 300 Mb/s out of it in iperf, although I haven't played with window sizes because I'm limited by hardware too. It has a WD Green WD10EADS drive, which achieves 40-50 MB/s on its own (iozone run locally).

As an IOzone target, it achieves around 21-24 MB/s.

I'm fighting a number of demons on that system (my file server). It's an older Athlon 64 with 2x512 MB of RAM; the motherboard's two SATA1 ports are on the PCI bus, the gigabit NIC is on the PCI bus, sound is on the PCI bus, and Samba isn't known for speed. In sum, all this cuts my networked read/write IO in half. Even at 20 MB/s, though, it's plenty for streaming HD video, which is its main purpose.
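The "cut in half" figure checks out roughly, under the assumption that every networked byte crosses the shared PCI bus twice - once in from the SATA controller, once out through the NIC - so the disk's local rate halves:

```shell
# Sanity check on halving (assumption: each byte crosses the shared
# PCI bus twice, SATA controller in and NIC out).
awk 'BEGIN {
  disk_low = 40; disk_high = 50   # MB/s, the local iozone read range above
  printf "expected networked range: %.0f-%.0f MB/s\n", disk_low / 2, disk_high / 2
}'
```

That predicted 20-25 MB/s range lines up with the 21-24 MB/s observed over the network.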

I'm in the shopping/planning stages for a server to replace the above. It'll certainly have a PCI-Express NIC and HD controllers! I'm trying to wrap my brain around OpenSolaris (and derivatives) and ZFS. ZFS is the cat's meow, but I hate Solaris: limited hardware compatibility, completely foreign syntax compared to Debian/Ubuntu, mediocre performance. I'm currently testing Nexenta, which combines the Solaris core with a Debian userspace in a GUI-free OS. I'm hoping Oracle GPLs ZFS immediately.
Are you sure everything is on the PCI bus? Either way I think it should give a little higher performance than that. With an older Athlon XP/nForce 2 combo I recall seeing Iozone speeds in the 30-60 MB/sec range. But it has been a little while since I tested that setup so I could be off. You could also disable the sound if the computer is just being used as a server to cut down on the PCI bus traffic.

Let me know how it turns out with the seagate drive in the server. Wondering if it will make much difference.

Are you sure everything is on the PCI bus?
I'm pretty sure. This is an old ASRock 939Dual-SATA2 board with the ULi M1567 southbridge. That chip only offers 2 SATA1 ports, which I believe are on the PCI bus. This board has a JMicron chip for a single SATA2 port that IS on a PCIe lane, but I've never had decent stability using it.
You could also disable the sound if the computer is just being used as a server to cut down on the PCI bus traffic.
Yeah, but by "server" I really mean "server/secondary HTPC/internet-email-etc testing machine". :)
Let me know how it turns out with the seagate drive in the server. Wondering if it will make much difference.
The Seagate drive is for my workstation. My current server is more of a test project. I'm slowly building a shopping list for my next "real" server, which will be faster and much more power efficient. My current server idles around 75W, and that's too much for me for 24/7 usage.

I'm currently looking at:
With a little undervolting & underclocking, people are routinely hitting 25-30W ish systems with these older nVidia/AMD combos.

I'm still testing OSes, though, and it's taking forever. So by the time I nail down that aspect, there might be better cost-effective MB/CPU combos available.
I just realized that you might be using Win XP on your server. Here is a copy of something I posted in another thread...

To get the best performance out of Win XP Pro, Windows needs to be set to use a large system cache when used as a server. Here are the steps from the Microsoft website.

1. Click Start, click Run, type sysdm.cpl in the Open box, and then press ENTER.
2. Click the Advanced tab, and then under Performance click Settings.
3. Click the Advanced tab, and then under Memory usage use one of the following methods:
• Click Programs if you use your computer primarily as a workstation instead of as a server. This option allocates more memory to your programs.
• Click System cache if your computer is used primarily as a server or if you use programs that use a large system cache.
4. Click OK to save preferences and close the dialog box.
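On a headless box, the same switch can be flipped from a command prompt. The assumption here is that the GUI's "System cache" option toggles the LargeSystemCache registry value; verify that on your own system before relying on it, and back up the key first:

```shell
REM Registry equivalent of choosing "System cache" in the GUI above
REM (assumed mapping to the LargeSystemCache value; 1 = System cache, 0 = Programs).
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 1 /f
REM Reboot for the change to take effect.
```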

Here is a link to a previous review of your motherboard: http://www.techpowerup.com/reviews/ASRock/939Dual-SATA2 From what I can tell, the diagram shows everything is connected to your southbridge and therefore connected via an HT link instead of the PCI bus. So the only thing on the PCI bus looks to be your network card. If that is true, I think you definitely could see better performance with your current hardware, but it really depends on what OS you are using on the server and the client. Just my opinion though.

I completely understand about the power draw at idle. I imagine my current server probably draws at least that much at idle, so I have set up my server to go into standby mode after 30 minutes of no activity. Then it wakes up whenever a computer on the network tries to access it. I think the components you have picked would work just fine and also be quite cost-effective (and efficient). You could pick up 1-2GB of RAM and altogether be just over $100. Not bad at all.

In my quest for better performance of my home network I have tested a few different OSes so if I can be of any more help let me know.

00Roush - Thanks, you've been a tremendous help.

My current server is running Ubuntu 8.10 32-bit. It needs to double as a "desktop" because it's connected to a TV in our living room, and is used for playback as well as network storage. (By the way, nVidia's VDPAU offload is a godsend - watching 1080p video on my lowly single-core Athlon uses only ~10% CPU.) The rest of my house is a mix of WinXP 32-bit and Ubuntu 8.10.

I am familiar with the Debian-based Linux distros, and wish I could stay with them. But I have experienced long-term data corruption on NTFS, ext2, ext3, and XFS file systems, so I do not plan to use them on my next storage server. The nature of a home media server - storing lots of data for a long time with infrequent usage - means silent corruption goes unnoticed for a long time. I had a music server for many years using NTFS and ext3 file systems, experienced the inevitable corruption, and (since manually checking each file is not an option) had been backing up the corrupted files. That's the devilish nature of storage servers - unless you have something scrubbing your data for errors, you can't know if your files are sound until you use them. By then the damage is done.
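For what it's worth, a crude userspace version of the scrubbing ZFS does can be rigged up with checksums. A minimal sketch (the /srv/media and /var/lib/scrub paths are hypothetical; point them at your own archive):

```shell
# Minimal userspace "scrub" sketch - the kind of end-to-end check ZFS automates.
# MEDIA and SUMS paths are hypothetical; keep SUMS outside MEDIA so it
# doesn't checksum itself.
MEDIA=/srv/media
SUMS=/var/lib/scrub/checksums.sha256

# One-time (or after adding files): record a checksum for every file.
find "$MEDIA" -type f -print0 | xargs -0 sha256sum > "$SUMS"

# Periodically (e.g. from cron): re-read everything and flag mismatches.
# Any FAILED line is silent corruption caught before you need the file.
sha256sum --quiet -c "$SUMS"
```

It catches bit rot on read-back, but unlike ZFS it can't heal anything; it only tells you which backup copy to reach for.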

As far as I know, the only two file systems that guard against data corruption are Btrfs (in Linux kernel 2.6.29, but still in alpha status) and ZFS (built into OpenSolaris and Solaris). Btrfs is the Linux world's response to ZFS. Both are self-healing and offer end-to-end checksumming (among other nice features). FreeBSD has a cut-down version of ZFS. Mac OS supports reading from ZFS volumes. And the Linux varieties of ZFS via FUSE do not interest me - I want kernel-level integration.

So, OpenSolaris it is. So far I can't stand the command line - it's so different from Debian. But ZFS is just too cool. I've found Nexenta, which grafts the Debian userspace (apt-get, etc.) on top of the Solaris core & ZFS. I haven't had time to play with it yet, as recent nice weather = house/yard work.

The Solaris HCL is pretty restrictive - there's nowhere near the driver support we get with Linux. So I've revised my shopping list again.

The new motherboard: Gigabyte GA-MA74GM-S2

The 740G/SB700 chipset is one of the lowest-powered chipsets available. It also offers 6 SATA ports (vs the previous board's 4), and those extra 2 are huge. ZFS wants direct control over the drives (no hardware/software/fake RAID in between), so really all you need is a lot of SATA drives in a JBOD configuration. Starting with 6 on-board means I won't have to buy a PCIe SATA card for a long time.
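The eventual pool setup is dead simple because ZFS gets the raw disks. A sketch of what it would look like on OpenSolaris (the c1t0d0-style device names are hypothetical - the OS assigns its own; check with the format command):

```shell
# Hypothetical device names - substitute your own (see `format` output).
# Hand ZFS the raw disks, no RAID controller in between:
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

zfs create tank/media       # a filesystem for the media library
zpool scrub tank            # walks every block and verifies its checksum
zpool status -v tank        # reports any corruption the scrub turned up
```

A periodic `zpool scrub` is exactly the automated error-scrubbing discussed above: with raidz redundancy, ZFS doesn't just detect a bad block, it rewrites it from parity.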

This motherboard is listed on the OpenSolaris HCL as "reported to work", with the main hurdles being the graphics (the standard radeon driver throws a fit, so the vesa driver is used as a workaround) and the Realtek 8111C LAN (but there's a workaround for that too). If I cannot make the Realtek LAN work, I always have my Intel PRO/1000 PCI NIC.

It's also discussed all over the place. Here's a review from SPCR (one of the lowest power-consuming boards they've tested). An Ubuntu server build log / gallery thread (also at SPCR). Thumbs up from the unRAID community, too.

I'm going with 4GB of RAM for 3 reasons: 1) ZFS loves RAM (it caches a lot of data), 2) with only 2 RAM slots I want to be as future-proofed as I can get, 3) low marginal cost over a 2GB kit. I know 4GB will draw more power than 2GB or 1GB, but I'm willing to deal with that for the reasons above.

A few of the components I want went on sale today (the motherboard & the CPU), so I'm going to get them on order. I wish I'd bought 2-3 more WD10EADS drives when Newegg had them on sale for $85 a few weeks ago. Those things rarely go on sale.

Parts on the way :)
Glad to help.

Interesting notes about the data corruption. When you say NTFS, do you mean with Linux or on a Windows machine? Just wondering, because I have never really heard of many problems with NTFS on Windows (or ext3 on Linux, for that matter). I know that NTFS on Linux/Unix usually is not the best idea, as support has not been very good.

Sounds like you're well on your way. Let us all know how everything works out when you get it set up.
