How To Build a Really Fast NAS - Part 4: Ubuntu Server
Just use file sizes well beyond your workstation or server RAM sizes and the cache effects will disappear. For large-file tests we use 200 to 400 GB file sizes. No test (over about two months now) has resulted in better than 68 MB/s large-file reads from a single drive over gigabit. That said, 100 MB/s is not terribly difficult with RAID 0 and Vista SP1 at both ends. The QNAP TS509 in RAID 0 mode with five drives will nearly saturate a gigabit connection for both reads and writes, again assuming you're using RAID 0 and Vista SP1 on the workstations. If you just copy 2 to 4 GB files at a time and average the results, you'll see large inaccuracies: the first 1 GB copies over at very high speed until the RAM cache is exhausted. Also, Iozone does not seem to handle load-balanced connections (which may be down to how I'm using the program); it under-reports substantially versus real-world measured results.
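As a quick sanity check on the "well beyond RAM" rule, a sketch like the following picks an Iozone maximum file size of at least twice physical RAM, so the page cache can't mask true disk throughput. The 8 GB figure is an illustrative stand-in for whatever your machine actually has:

```shell
# Pick an Iozone max file size at least 2x physical RAM. ram_kb is a
# stand-in here; on a live Linux system you would read it from
# /proc/meminfo (the MemTotal line).
ram_kb=8388608                    # 8 GB, illustrative
min_test_kb=$((ram_kb * 2))       # test files should be at least this big
echo "RAM ${ram_kb} kB -> use iozone -g ${min_test_kb}k or larger"
```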

Vista reports large-file reads (RAID 5) from the same NAS at 97 to 100 MB/s with network load balancing enabled on an LACP-capable switch, and RAID 0 on the workstations. So really, the only challenges at this market level are getting RAID 5 writes over gigabit approaching 100 MB/s, and doing this potentially with several workstations at the same time.
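On the Ubuntu Server side of a setup like this, link aggregation is typically configured as an 802.3ad (LACP) bond. A minimal sketch of `/etc/network/interfaces`, assuming the `ifenslave` package is installed and the switch ports are configured for LACP; the interface names and address are placeholders:

```
# /etc/network/interfaces sketch -- eth0/eth1 and the address are
# illustrative placeholders, not values from this thread.
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad          # LACP
    bond-miimon 100            # link check interval, ms
    bond-lacp-rate fast
```

Note that a single TCP stream still rides one physical link; aggregation helps most when several workstations hit the NAS at once, which matches the multi-client goal described above.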

We've got an Adaptec PCIe RAID card inbound (about $300) to see how well it works for RAID 5 writes over the LAN, and certainly Ubuntu (and the rest of the Linux variants), Vista, Server 2008, etc. will have no problem handling the higher data rates.

Just to clarify, the 4 files I mentioned in my previous post were copied at one time.

Were your 200GB to 400GB files a single file or a set of files together?

I will see what happens with a 25GB file and report back.

For Iozone you might try testing with larger request sizes on Windows Vista and see what happens. I have found that larger request sizes show higher read speeds but much lower write speeds. I believe this has to do with Vista using larger I/O sizes for file copies, as described here: http://blogs.technet.com/markrussinovich/archive/2008/02/04/2826167.aspx Just a guess though.

00Roush
 
Is there any way you could run Iozone on your server's local drive?
I have done that and with a Velociraptor. As Dennis says, once you exceed RAM size, the spindle speed of a single drive, even the Velociraptor, limits throughput to ~70 MB/s.
 
We've got an Adaptec PCIe RAID card inbound (about $300) to see how well it works for RAID 5 writes over the LAN, and certainly Ubuntu (and the rest of the Linux variants), Vista, Server 2008, etc. will have no problem handling the higher data rates.
What model card and what are you using for a motherboard?

Also, have you looked at RAID 0 with just two drives? I have yet to see any difference with RAID 0 and two drives. Perhaps there aren't enough spindles to interleave reads quickly enough with just two.
 
Tim, the Adaptec card is this one: http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/value/SAS-3405/ It's relatively inexpensive and at least one review (but just of RAID 0) provided adequate numbers for it. The RocketRaid cards looked good performance-wise but in searching a few forums, it looked like there are lots of driver issues, particularly with the Linux variants. If we decide to build an external array, we'll use this one: http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/value/SAS-4805/

The board it will be tested on is the Asus P5N-32 SLI. This is a dual-NIC board with driver support for link aggregation. If I were building up a box, the motherboard used would be the Asus P5Q Premium, specifically because it has four onboard gigabit LAN ports supporting LACP.

All of our tests so far have been with three-drive RAID 0; we haven't tried two, as I wanted sustained read/write speeds of 140 to 150 MB/s on these workstations. Looking over our results matrix, I'm seeing performance from our 4.3 GB file-set copy varying from ~85 to 95 MB/s between the RAID 0 workstations with full services enabled, including AV. With the TS509 in RAID 5, load-balancing mode, very large files are being read at ~100 MB/s, but only to the RAID 0 (three-drive) Vista SP1 workstations. It will sustain this rate all day providing only one workstation is reading from the NAS unit.

The fact that our load-balancing tests on this NAS are providing reads of about 60 MB/s x 2 would suggest that the TS509 drive array is capable of about 120 MB/s total.
 
Interesting that it's a PCIe x4 card. Highpoint's hardware cards are x8, as are 3ware/AMCC's.

Thanks for the mobo info. Cost for this stuff is starting to add up, eh?
 
I have done that and with a Velociraptor. As Dennis says, once you exceed RAM size, the spindle speed of a single drive, even the Velociraptor, limits throughput to ~70 MB/s.
So this means that, if you want to copy a lot of DVD ISOs (usually about 5 GB per file), you would need a 64-bit OS to be able to use more than 4 GB of RAM in order not to run into this throughput limit?
 
Indeed, an interesting Adaptec card, and nice that it only needs PCIe x4. I was going to use the Adaptec 5405 myself. That card is PCIe x8, but that's no problem, since my motherboard has more than one x16 slot on it.

What I do find strange is that I can find this 5405 card cheaper than the 3405, even though the 3405 is midrange and the 5405 is higher end. (It's only 20 euros cheaper, but still...) I'll have a closer look at the differences between the cards...
 
So this means that, if you want to copy a lot of DVD ISOs (usually about 5 GB per file), you would need a 64-bit OS to be able to use more than 4 GB of RAM in order not to run into this throughput limit?

Assuming you are going to use only one drive. But what about 2 Velociraptors in a RAID-0?
 
Assuming you are going to use only one drive. But what about 2 Velociraptors in a RAID-0?
You'd think that in that case the gigabit Ethernet interface would be the limiting factor. But even then, I don't think it would make much difference, since there also seems to be a limit set by cache size versus the size of the files you're copying.
 
It would depend on how RAID 0 was done, I think. If software RAID, running out of memory will probably hurt performance.

At some point you run into a limit. All depends on how much money you want to throw at the problem!
 
I have done that and with a Velociraptor. As Dennis says, once you exceed RAM size, the spindle speed of a single drive, even the Velociraptor, limits throughput to ~70 MB/s.

Are you referring to the tests you did between two computers? I was wondering about testing the Velociraptor locally, in the computer where you had Iozone running.

So in my case I tested my local drive E:, a single 320GB WD Caviar SE16, using this command line:
iozone -Rab c:\iozone\results.xls -i 0 -i 1 -+u -f e:\sharing\test\test.txt -q 64k -y 64k -n 256M -g 8G -z

Here is a copy of my results.
The top row is the record size and the left column is the file size (both in KB; throughput in KB/s).

Writer Report
File size | 64 KB records
262144 | 811738
524288 | 791374
1048576 | 191818
2097152 | 130834
4194304 | 115021
8388608 | 107853

Re-writer Report
File size | 64 KB records
262144 | 908350
524288 | 898201
1048576 | 183911
2097152 | 131384
4194304 | 116158
8388608 | 111183

Reader Report
File size | 64 KB records
262144 | 998911
524288 | 978317
1048576 | 961273
2097152 | 959634
4194304 | 109822
8388608 | 109801

Re-reader Report
File size | 64 KB records
262144 | 988318
524288 | 1013267
1048576 | 974707
2097152 | 994155
4194304 | 109767
8388608 | 109794

This Iozone run was done on a computer running Windows Vista SP1 with 4GB of RAM. My results from this test show that after the RAM size is exceeded both read and write speeds are at ~100 MB/sec for this particular disk.

Dennis mentioned testing with file sizes much larger than both the client's and server's RAM, so I put together a single file 26.4 GB in size to test with. Writing this 26.4 GB file from my Windows Vista SP1 (4GB RAM) client to my server running Ubuntu Server (1GB RAM) came in at 98.3 MB/sec according to what Vista was reporting. To ensure the client cache didn't affect the read speed from the server, I restarted the Vista computer before copying the file the other way. Reading the file from the Ubuntu server to the Vista client came in at 86.5 MB/sec according to the copy window statistics. Both the Vista client and the Ubuntu server were reading from and writing to a single 320GB WD Caviar SE16.
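For anyone reproducing this kind of timed-copy test, the throughput figure is just bytes divided by elapsed seconds. The sketch below uses a hypothetical 288-second copy of a ~26.4 GB file purely as an example of the arithmetic, not a measured result from this thread:

```shell
# Convert a timed large-file copy into MB/s. Both numbers are
# illustrative: a ~26.4 GB file and a hypothetical 288 s elapsed time.
bytes=28346646528
seconds=288
mb_per_s=$((bytes / seconds / 1000000))
echo "${mb_per_s} MB/s"
```

With these example numbers the script prints 98 MB/s, in the same ballpark as the write figure reported above.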

Next up I will see if I can saturate the gigabit network using a two drive RAID 0 array on each end.

00Roush
 
I ran the same command you did on my 300GB VRaptor, which is my C: drive, on XP Pro SP3 with an Intel Q6600 @ 3.4GHz and 4GB of RAM.

I was happy with the results; it was very fast even as my C: drive running Windows while doing the writes and reads. The VRaptor keeps speeds up even with large files. Running out of RAM at 4GB made a larger-than-normal impact on some tests, because the Windows swap file is also on the C: drive: as RAM overflows, the OS pushes data into the swap file, which delays the reads/writes to Iozone's test file.

I should re-run this test in some other ways with the other Raptor drives I have in my other machines. I'll try to set them up separately in a RAID 0 to test with sharing over my network, since I'm looking to buy/build a 100+ MB/s NAS as well.


iozone -Rab d:\results.xls -i 0 -i 1 -+u -f c:\test.txt -q 64k -y 64k -n 256M -g 8G -z

300GB VRaptor C: XP Pro SP3 | Single 320GB WD Caviar SE16 from above

The top row is the record size and the left column is the file size (both in KB; throughput in KB/s).

Writer Report
File size | 300GB VRaptor | 320GB Caviar SE16
262,144 | 1,248,715 | 811,738
524,288 | 1,197,132 | 791,374
1,048,576 | 175,725 | 191,818
2,097,152 | 123,675 | 130,834
4,194,304 | 101,071 | 115,021
8,388,608 | 96,912 | 107,853

Re-writer Report
File size | 300GB VRaptor | 320GB Caviar SE16
262,144 | 1,424,735 | 908,350
524,288 | 1,378,035 | 898,201
1,048,576 | 186,326 | 183,911
2,097,152 | 121,31 | 131,384
4,194,304 | 106,612 | 116,158
8,388,608 | 98,25 | 111,183

Reader Report
File size | 300GB VRaptor | 320GB Caviar SE16
262,144 | 1,777,839 | 998,911
524,288 | 1,866,234 | 978,317
1,048,576 | 1,846,990 | 961,273
2,097,152 | 1,822,931 | 959,634
4,194,304 | 95,057 | 109,822
8,388,608 | 93,371 | 109,801

Re-reader Report
File size | 300GB VRaptor | 320GB Caviar SE16
262,144 | 1,801,134 | 988,318
524,288 | 1,857,744 | 1,013,267
1,048,576 | 1,721,439 | 974,707
2,097,152 | 1,795,140 | 994,155
4,194,304 | 93,028 | 109,767
8,388,608 | 92,775 | 109,794

Writer CPU utilization report (Zero values should be ignored)
File size | CPU %
262144 | 9.922886848
524288 | 7.05589056
1048576 | 7.57946825
2097152 | 8.204320908
4194304 | 8.971477509
8388608 | 9.313425064

Re-writer CPU utilization report (Zero values should be ignored)
File size | CPU %
262144 | 6.352772713
524288 | 8.925163269
1048576 | 7.623709679
2097152 | 5.974888325
4194304 | 7.361453056
8388608 | 7.964013577

Reader CPU utilization report (Zero values should be ignored)
File size | CPU %
262144 | 84.67961884
524288 | 100
1048576 | 88.04728699
2097152 | 84.1309967
4194304 | 6.977123737
8388608 | 6.469777584

Re-reader CPU utilization report (Zero values should be ignored)
File size | CPU %
262144 | 74.81078339
524288 | 93.84738159
1048576 | 82.06349945
2097152 | 95.00187683
4194304 | 6.27195549
8388608 | 7.396017075
 
Here are the results of the Hitachi 1TB drive I keep as my D: drive in my XP Pro SP3 PC.

It was much slower on the largest files and had lower CPU usage than my VRaptor did, but the VRaptor was running the OS while benchmarking.

Both drives start out fast, over 100 MB/s, at the smaller file sizes. The Hitachi is rated close to 70 MB/s and the VRaptor at about 150 MB/s; I have seen other benchmark programs show results consistent with the ones here.

The VRaptor sustained 100 MB/s where the Hitachi dropped to 50-70 MB/s.

iozone -Rab c:\results.xls -i 0 -i 1 -+u -f d:\test.txt -q 64k -y 64k -n 256M -g 8G -z

The top row is the record size and the left column is the file size (both in KB; throughput in KB/s).

Writer Report
File size | 64 KB records
262,144 | 1,121,933
524,288 | 1,175,154
1,048,576 | 83,357
2,097,152 | 58,195
4,194,304 | 50,817
8,388,608 | 48,400

Re-writer Report
File size | 64 KB records
262,144 | 1,343,390
524,288 | 1,359,304
1,048,576 | 99,264
2,097,152 | 68,997
4,194,304 | 60,190
8,388,608 | 59,841

Reader Report
File size | 64 KB records
262,144 | 1,747,881
524,288 | 1,697,196
1,048,576 | 1,755,227
2,097,152 | 1,759,823
4,194,304 | 63,212
8,388,608 | 62,755

Re-reader Report
File size | 64 KB records
262,144 | 1,773,592
524,288 | 1,739,655
1,048,576 | 1,576,448
2,097,152 | 1,764,554
4,194,304 | 63,494
8,388,608 | 63,047

Writer CPU utilization report (Zero values should be ignored)
File size | CPU %
262,144 | 4
524,288 | 5
1,048,576 | 4
2,097,152 | 4
4,194,304 | 4
8,388,608 | 5

Re-writer CPU utilization report (Zero values should be ignored)
File size | CPU %
262,144 | 5
524,288 | 4
1,048,576 | 4
2,097,152 | 3
4,194,304 | 4
8,388,608 | 5

Reader CPU utilization report (Zero values should be ignored)
File size | CPU %
262,144 | 45
524,288 | 86
1,048,576 | 99
2,097,152 | 96
4,194,304 | 5
8,388,608 | 5

Re-reader CPU utilization report (Zero values should be ignored)
File size | CPU %
262,144 | 95
524,288 | 93
1,048,576 | 96
2,097,152 | 87
4,194,304 | 4
8,388,608 | 5
 
Here are the results of the Hitachi 1TB drive I keep as my D: drive in my XP Pro SP3 PC.
Sorry, I'm confused on the configuration.
Is this a local test (not over the network) and where is the VRaptor?
 
Running out of RAM at 4GB made a larger-than-normal impact on some tests, because the Windows swap file is also on the C: drive: as RAM overflows, the OS pushes data into the swap file, which delays the reads/writes to Iozone's test file.

You are probably correct about the swap file accesses slowing things down a bit.

Something else I wanted to point out is that you will probably not see peak performance if the hard drive is, say, half full. This is due to the fact that the beginning of the drive is always the fastest and usually gets filled first. If you have seen benchmark graphs from programs like HD Tach, you can see that throughput is not linear across the whole disk (except for SSDs). Basically, read/write speed drops as you move away from the beginning of the drive.
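One way to see this outer-vs-inner zone effect directly is to time raw sequential reads at the start of the disk and near its end. The sketch below only prints the two dd commands it would use; `/dev/sdX` and the drive size are placeholders, and `iflag=direct` assumes GNU dd on Linux (direct I/O bypasses the page cache):

```shell
# Compare outer-zone vs inner-zone sequential read speed. DEV and
# SIZE_MB are placeholders; run the printed commands as root (reads only).
DEV=/dev/sdX
SIZE_MB=953869                         # ~1 TB drive in MiB, illustrative
SKIP_MB=$((SIZE_MB - 1024))            # start of the last ~1 GiB
echo "dd if=$DEV of=/dev/null bs=1M count=1024 iflag=direct"
echo "dd if=$DEV of=/dev/null bs=1M count=1024 skip=$SKIP_MB iflag=direct"
```

On a conventional hard drive the second command should report noticeably lower MB/s than the first, matching the HD Tach curves described above.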

00Roush
 
The swap file issue is a good point. There is a rather dramatic dip at the 1GB file mark in our tests, which is likely where the workstation's 2GB of RAM fills up. I'll have to run a test or two on the workstations with no swap file in use.

What is Ubuntu's upper RAM limit, assuming 32-bit?
 
Jalyst and corndog,
I'm not real hot on looking at Nexenta anytime soon, since my trusted advisor on all things network-filesystem related, Don Capps, didn't have very nice things to say about it.

I'm disappointed to see you pass on one of the most significant developments in filesystems in recent years. Don Capps is a smart guy. However, he works for NetApp. Consider this when taking his advice on ZFS, since one of NetApp's largest competitors (Sun) developed ZFS. If ZFS was crap, NetApp wouldn't have sued Sun over it. ZFS is a threat to NetApp's business because Sun was giving it away.

Don't let a Sun competitor talk you out of giving ZFS a fair test.
 
I'm disappointed to see you pass on one of the most significant developments in filesystems in recent years. Don Capps is a smart guy. However, he works for NetApp. Consider this when taking his advice on ZFS, since one of NetApp's largest competitors (Sun) developed ZFS. If ZFS was crap, NetApp wouldn't have sued Sun over it. ZFS is a threat to NetApp's business because Sun was giving it away.

Don't let a Sun competitor talk you out of giving ZFS a fair test.
Fair point. There are others who have taken up the challenge of checking out ZFS and posted their results. Here's one thread.
 
