How To Build a Really Fast NAS - Part 2: Shaking Down the Testbed


Madwand

Regular Contributor
It must be pretty difficult to consider changing the testing setup when you have a ton of existing results covering several different vendors, and a lot of time and effort invested.

However, I wish to say that I've never really been happy with the iozone results -- from the outset, when it produced some very pretty and impressive-looking 3D graphs that were, on closer examination, pretty useless, to later on, when tests supposedly run against a networked remote machine would sometimes show performance better than the given network speed allows.

Strip out the material which is IMO misleading due to significant cache impact, and you're left with something that gives you very little to graph and could be produced with several other tools, perhaps more quickly and reliably.

Well, enough ranting. I think the steps you've taken to test the testbed are great, and I strongly encourage you to go further. For one, as some posters here have done (e.g. Dennis Wood), you should IMO ground your tests in real-world application performance. If the tests don't give you results that at least roughly correspond to real file transfers, they're not useful, and you should change the tests. The "small file tests" are IMO in this category -- with real small-file transfers you'd never see performance exceeding the network bandwidth, and you'd very often see performance far below what you get with large files.
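Something as simple as a timed copy of a real file would ground the numbers. Here's a minimal Python sketch -- the paths and mapped drive letter are hypothetical, and the source file should be much larger than the client's RAM so the OS cache doesn't flatter the result:

    # Time a real file copy to a mapped NAS share and report MB/s.
    # SRC and DST are hypothetical; point SRC at a large local file and
    # DST at the NAS share you want to measure.
    import os
    import shutil
    import time

    SRC = r"C:\testdata\big_testfile.bin"   # hypothetical local source file
    DST = r"Z:\big_testfile.bin"            # hypothetical mapped NAS share

    size_mb = os.path.getsize(SRC) / (1024 * 1024)

    start = time.time()
    shutil.copyfile(SRC, DST)
    elapsed = time.time() - start

    print(f"Copied {size_mb:.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.1f} MB/s")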

Intel has recently provided a tool which is interesting. They've apparently captured a bunch of different application access patterns and bundled them into a test suite with an executor. This is really good in theory, but it still needs to be explicitly correlated with real-world tests IMO. There are a couple of things I don't like about the Intel test -- e.g. it's very insular about the platforms it supports -- but if you happen to get past this, and also validate the test for a specific client against some meaningful real-world benchmarks, then perhaps the test may be useful.

I understand that testing is a multi-dimensionally complex endeavor, and that even something seemingly as simple as file transfers vary bizarrely in practice, but I feel that the oddities in the results represent motivation and opportunity for improvement.

[Article Link]
 
Interesting article on the issues with network speed testing. Your main issue appeared to be hard-drive speed in your test machine, but you don't seem to have tried to improve its performance using RAID 0, which should theoretically double the throughput of a single drive, especially with larger reads/writes. Two other possible solutions: a dedicated RAID controller in your test PC -- some of these have extremely large caches (up to 2GB), which should let you test network transfer speeds without hitting the limitations of the test PC's hard drive -- or, perhaps best of all, one of the Gigabyte i-RAM drives. While the i-RAM is limited to SATA150 speeds, it runs pretty close to that theoretical maximum, which is faster than anything a single gigabit network connection can carry. This would completely eliminate any limitation of your PC and give a "true" result for your NAS and network.
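For rough perspective, a back-of-envelope sketch (in Python, with every figure being a nominal assumption) of why either option should clear the gigabit ceiling:

    # All figures below are assumptions: nominal single-drive sequential
    # throughput, the SATA150 interface ceiling, and raw gigabit wire speed.
    single_drive = 70                 # MB/s, assumed sustained rate
    raid0_two    = 2 * single_drive   # ~140 MB/s theoretical 2-drive stripe
    i_ram        = 150                # MB/s, SATA150 ceiling (i-RAM runs close)
    gigabit      = 125                # MB/s, raw gigabit wire speed

    for name, rate in [("2-drive RAID 0", raid0_two), ("i-RAM", i_ram)]:
        verdict = "above" if rate > gigabit else "below"
        print(f"{name}: ~{rate} MB/s, {verdict} gigabit wire speed")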

cheers,
mangyDOG
 
Drive speed in the iozone machine really is not an issue or the limit. iozone manages its disk accesses very carefully and operates from memory during the tests.
 
The FastNAS series is looking great. Keep up the good work, Tim!

One question: Might it be prudent to get a baseline drive performance test before testing I/O on the network (e.g. run HDTach)? Just thought it might be a good place to start, to know where the drive performance is sitting. But it sounds like you've already done some tests to establish good baselines. Plus I see you've referenced Tom's charts.
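A crude way to get that baseline without extra tools is to time a big sequential read. A rough Python sketch (the file path is hypothetical, and the file needs to be several times larger than RAM or the OS cache will inflate the number; HDTach reads the raw device, so this is only an approximation):

    # Crude sequential-read baseline for the test machine's drive.
    import time

    FILE = r"C:\testdata\big_testfile.bin"  # hypothetical large file
    CHUNK = 4 * 1024 * 1024                 # read in 4 MB chunks

    start = time.time()
    total = 0
    with open(FILE, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)

    rate = total / (time.time() - start) / 1_000_000
    print(f"Sequential read: {rate:.1f} MB/s")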
 
Hi Scotty,

I thought of doing drive tests. But as long as I can get the data someplace else, I'm ok with that.

WD is going to be sending some VelociRaptors. We'll see if they really help.
 
Great article!

This series could not have been more timely, as I've recently started working towards a NAS and trying to decide between buying and building a box.

I have specifically been plagued by the fact that NAS performance data never seems to approach saturating a gigabit network, so I couldn't believe my luck when I read Part 2.

Let me preface my comments by saying I am far from an expert on RAID arrays, NAS, or any other aspect of networking.

That being said, here's my issue. You mention that the upper limit on performance is due to hard drive read/write limitations. While this makes sense for a single drive, it's always been my understanding that the whole purpose of striping (such as in RAID 5) is to increase read/write performance by reading/writing data on multiple drives at once. While an individual drive may be limited to read speeds in the 70MB/s range, a RAID 5 array should be able to achieve read speeds well in excess of 125MB/s.
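The naive striping arithmetic does point well past gigabit wire speed. A back-of-envelope sketch, using an assumed 70MB/s single-drive figure and idealized perfect scaling:

    # Idealized sequential-read scaling for striped arrays; the 70 MB/s
    # single-drive figure and perfect scaling are both assumptions.
    single_drive = 70   # MB/s, assumed
    n = 5               # drives in the array

    raid0_read = n * single_drive         # every spindle carries data
    raid5_read = (n - 1) * single_drive   # one stripe unit per row is parity

    print(f"RAID 0 read: ~{raid0_read} MB/s")   # ~350 MB/s
    print(f"RAID 5 read: ~{raid5_read} MB/s")   # ~280 MB/s
    # Both are far beyond the ~125 MB/s a gigabit link can carry.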

This seems to be supported by benchmarks posted at MaximumPC in Nov 2007 and again in April 2008, as well as this 2007 ZDNet article.

If I'm completely missing the point of your explanation, I would greatly appreciate any further elaboration you can give. If not, I'm still in the dark as to why a RAID 5 NAS would have this glass ceiling.

Anyway, I will be anxiously awaiting the remainder of this series. I hope you're considering the Adaptec 5405 as one of the RAID controllers to include, as it seems to have a great price/performance ratio.

Rodney
 
In theory, all multiple-drive RAID formats should be able to bypass the speed limitations of a single drive. But in practice, things like RAID algorithms and processor speed get in the way.
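For writes in particular, the textbook RAID 5 penalty can be sketched with some assumed numbers: every small write turns into a read-modify-write cycle, and the parity XOR burns NAS CPU on top of that.

    # Illustrative only: assumed per-spindle random IOPS and the classic
    # 4-operation RAID 5 small-write penalty (read old data, read old
    # parity, write new data, write new parity).
    per_drive_iops = 80   # assumed random IOPS per spindle
    n = 5                 # drives in the array

    raid0_write_iops = n * per_drive_iops
    raid5_write_iops = n * per_drive_iops / 4   # each write costs 4 disk ops

    print(f"RAID 0 small writes: ~{raid0_write_iops:.0f} IOPS")
    print(f"RAID 5 small writes: ~{raid5_write_iops:.0f} IOPS")
    # On an embedded NAS CPU, computing the parity XOR for every stripe
    # can become a second bottleneck on top of the extra disk operations.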

I will look at RAID controllers later in the series. But first we'll see what can be done at a lower cost.
 
One thing we've noted is that the only reliable way of getting better numbers from the Intel SS4200 unit in testing (with XP as the OS) was to use RAID 0 on the testing workstation. Also, writing to a RAID 5, whether on the NAS or the workstation, is much slower than RAID 0, and it gets worse if you're writing multiple data streams.

Vista over gigabit seems to be more efficient thanks to SMB2, as we're able to get higher numbers over the LAN from the Intel NAS with a single-drive Vista SP1 laptop.

Our next test will reconfigure the Intel SS4200 from RAID5 to 0, and test a RAID0 Vista workstation on it. I believe this should break our current 45MB/s max measured transfer rates over gigabit ethernet.
 
Our next test will reconfigure the Intel SS4200 from RAID5 to 0, and test a RAID0 Vista workstation on it. I believe this should break our current 45MB/s max measured transfer rates over gigabit ethernet.
I didn't see a RAID 0 option for the SS4200 when I tested it. Are you not running the EMC OS?
 
Tim, I loaded the new firmware and there are only two options, RAID 5 or RAID 10. The process of rebuilding it was buggy, as I had to reboot and re-request the change several times after removing all the shares/data from the box. That said, RAID 10 was faster, giving a measured max write of 51.3MB/s and a whopping 72.7MB/s read between the Intel NAS and a test workstation now updated from XP to Vista SP1. The workstation has a 3-drive RAID 0 array.

I'm convinced both NAS units support SMB2: the numbers for the exact same test as above, but to a 3-drive RAID 0 workstation running XP SP3 instead of Vista SP1, are 34.6 MB/s write and 39 MB/s read. Vista SP1 tests about 80% faster. In other words, the jump in performance from RAID 10 over RAID 5 is pretty small under XP (about 5 MB/s), but a much larger increase comes from just using Vista SP1 as the client.

Once those tests were done we loaded up the newly arrived QNAP TS509 and managed a blistering 81.7 MB/s (write) and 95.2 MB/s (read) between the Vista RAID 0 workstation and the QNAP configured as a 5-drive RAID 0 (5 x Seagate 1TB). Iozone reported 120 MB/s reading from the QNAP unit up until about 600MB file sizes! The ffmpeg encoding test came in at 32.5 MB/s and 65 MB/s. I haven't looked at all of the Iozone figures yet, but the ones above are actually measured, not benchmarked. The numbers are very encouraging, as we're likely maxing out the gigabit connection for the first time with fast disk systems at both ends, using Vista/SMB2.

In copying 18.3 GB of test files from the QNAP NAS to the test Vista SP1 workstation (to change the NAS from RAID 0 to RAID 5), file copy rates hovered between 93 and 97 MB/s.
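For context, a quick sanity check on how close those copy rates sit to what a gigabit link can actually carry (the protocol-overhead figure below is an assumption):

    # Raw gigabit is 125 MB/s (decimal); Ethernet/IP/TCP/SMB framing eats
    # very roughly 5-10% of that, so assume ~6% overhead here.
    wire_raw  = 1_000_000_000 / 8 / 1_000_000   # 125.0 MB/s
    practical = wire_raw * 0.94                 # ~117 MB/s, an assumption

    measured = 95   # MB/s, middle of the observed 93-97 MB/s range
    print(f"~{measured / practical:.0%} of practical gigabit throughput")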
 
http://www.nexenta.com/corp/
I would be very intrigued to hear any experiences related to using this.

The GNU/OSS + Solaris kernel + Solaris userspace (ZFS, DTrace, etc.) world has really taken off over the past 12+ months.

Sun has been very clever in embracing development methodologies similar to those of the GNU/Linux and *BSD communities.

I think you'll find this to be a capable alternative to some of the GNU/Linux & *BSD based solutions suggested... It might be less mature in some respects, but give it time...

I'm also looking for a less "commercial" OpenSolaris distro with a similar focus/feature set. I may have to "roll my own" or find someone who already is.
 
Impressive results, Dennis. Maybe I don't have to continue the "Fast NAS" series!
 
Thanks for the suggestion, jalyst. Any idea what the pricing is?
 
Wow Dennis, those are the fastest NAS transfer speeds I've seen posted.

So would it be fair to assume that the ~60MB/s glass ceiling encountered in NAS testing comes from the single-drive systems the NAS is transferring files to and from, as opposed to a limitation of the NAS itself? This may have been what Tim was saying in Part 2 and I was just too dense to get it.
 
I'm confused as to why the NAS performance dropped off at 1-2GB file sizes. Is this because the file sizes are larger than the memory buffer on the QNAP? If so, wouldn't the 15 MB/sec throughput then be more reflective of actual NAS transfer speeds without the benefit of write caching?

Also, you mentioned that it appears as though drive transfer speeds are what limited throughput to about 66 MB/sec, and compared this to the benchmarks for the Hitachi unit you were using in the QNAP.

However, as I understand it (and I'm not an expert), isn't RAID5 striped across drives with parity? It's a bit complex to tease out, as parity calculations will slow down throughput (and confound analysis of actual drive transfer rates). Still, wouldn't RAID5 be expected to be faster than transfer rates to a single drive, due to the striping across multiple drives?

If so, I'm not sure that the 66MB/sec (which falls in the benchmark transfer rates for a single drive) can be interpreted as a limitation imposed by intrinsic drive speeds.
 
RK, based on a lot of testing, yes, the 60MB/s ceiling is being hit due to local hard drive speeds.

Tarrant, the QNAP has so far only been tested in single-drive and 5-drive RAID 0 configurations. RAID 5 tests are coming, but in all of our other tests RAID 5 suffers when it comes to writing, and suffers even more when writing more than one data stream.

The fastest NAS tested by Tim so far is the TS509. In our tests, the TS509 is faster than the Intel in every configuration tried so far. Here are the basic bottlenecks we've observed over quite a few tests; they should be eliminated in the order given if you want to reach the gigabit "wire speed" limit (a quick network-only check is sketched after the list):

1. You need a fast NAS as above, or home built variant.
2. Gigabit ethernet adapter should be on the PCIe bus, not PCI.
3. RAID 0 must be used on the workstation, and ideally on the NAS too.
4. Vista SP1 is at least 75% faster than XP SP3 in all tests.
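To separate the network leg from the disks, a raw TCP transfer with no filesystem involved shows what the wire and adapters can do by themselves. A minimal Python sketch (a crude stand-in for something like iperf; the port, chunk size, and 1 GB total are arbitrary choices):

    # Run "server" on one machine, then "client <server-ip>" on the other.
    import socket
    import sys
    import time

    PORT  = 5201
    CHUNK = 64 * 1024            # 64 KB per send/recv
    TOTAL = 1024 * 1024 * 1024   # move 1 GB in total

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                received = 0
                while received < TOTAL:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)

    def client(host):
        payload = b"\x00" * CHUNK
        with socket.create_connection((host, PORT)) as s:
            start = time.time()
            sent = 0
            while sent < TOTAL:
                s.sendall(payload)
                sent += len(payload)
        print(f"{sent / (time.time() - start) / 1_000_000:.1f} MB/s raw TCP")

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])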
 
I'd just like to agree with mangyDOG here. I think that using a Gigabyte i-RAM setup is really the only way to truly test NAS performance while being sure there's no RAID overhead affecting the benchmark.

Is this something you can look into, Tim?

Kevin
 
I'd just like to agree with mangyDOG here. I think that using a Gigabyte i-RAM setup is really the only way to truly test NAS performance while being sure there's no RAID overhead affecting the benchmark.
Hard drive performance of the machine running iozone is not a factor. The drive is not used during testing and the system is more than capable of running the gigabit link up to its limit.
 
