
Best drives for NAS Fileserver - throughput or high I/O?


dkazaz

Occasional Visitor
Planning the disks to be used in a new 5-bay NAS, and I have narrowed it down to two very similar drives. The main difference (and it's a small one) is that one is more optimized for throughput, i.e. sequential reads/writes, but has slightly worse access times and I/O performance, while the other is better at I/O, with slightly better access times but slightly lower throughput numbers.

In my case it's likely to come down to price, but it got me thinking - which is better for a file server NAS (one that doesn't serve many simultaneous clients)? My thinking was throughput, but does I/O matter too?
 
Which NAS, and did you consult the manufacturer's compatibility list? I highly recommend you select from disks that have been tested and are known to work by the manufacturer.
 

I/O performance really should only matter if your NAS will be serving multiple clients at the same time. It can also help if you are transferring a lot of small files. I would pick the drive with the higher throughput.
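That per-file overhead is easy to model. A rough back-of-the-envelope sketch in Python (the 14 ms access time and 100 MB/s sequential figures are illustrative assumptions, not measurements of any particular drive):

```python
# Rough model: effective throughput when every file transfer pays a
# fixed access-time cost before sequential reading begins. Shows why
# access time dominates for many small files but is irrelevant for
# large monolithic transfers.

def effective_mb_per_s(file_mb, access_ms, seq_mb_per_s):
    """Effective throughput for one file, including the access penalty."""
    transfer_s = file_mb / seq_mb_per_s
    total_s = access_ms / 1000 + transfer_s
    return file_mb / total_s

# Assumed figures: ~14 ms access time, 100 MB/s sequential throughput.
large = effective_mb_per_s(1000, 14, 100)  # 1 GB file: ~100 MB/s
small = effective_mb_per_s(0.1, 14, 100)   # 100 KB file: ~7 MB/s

print(f"Large file: {large:.1f} MB/s, small file: {small:.1f} MB/s")
```

With big files the access time is noise; with tiny files it eats most of the wall-clock time, which is where the better-I/O drive would pull ahead.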

00Roush
 
Are you planning to use the disks as JBOD or in some type of RAID? I recently built my own home-grown NAS with 5 Samsung EcoGreen F2 1.5TB drives when they were on sale at Newegg for $90 each. I'm running them in RAID 5 using a Dell Perc 6i controller and they perform very well even though they are 5400 RPM. I can post some benchmark results if interested. I'm able to get over 100MB/sec transfer (writes) with jumbo frames through my GigE network on large monolithic files, and 110MB/sec+ on reads, especially when the file is cached on the NAS.

I agree with claykin: if you're buying a specific hardware NAS unit, check its hardware compatibility list to make sure there are no caveats with certain drives, sizes, etc. If you plan to use your drives in a RAID array, you'll want to be aware of issues with TLER/CCTL. Some drives work better than others with this, and some can be configured whereas others cannot.
 
@All

I feel dumb for not putting all this info you pointed out in my original post!:eek:

The disks are for a BYOD NAS (Synology DS1010+), most likely in a RAID 5, maybe RAID 6 array. Not decided yet, as I'm not very familiar with RAID 6 and its performance.

All disks are from the Hardware Compatibility List, which is impressively extensive. However, some compatible drives have a fair share of troubles, such as high LCC (load cycle counts) for WD Green drives. :(

I opted for "safe" choices and narrowed it down to Samsung, either the 5400rpm 1.5TB model (high throughput) or the 7200rpm 1TB model (high I/O, but lower capacity).

@handruin: I'm almost settled on getting the same drives as you! I'd be really interested in your benchmarks if you can post them. Thanks a lot!:)
 
dkazaz, I have benchmarks for RAID 5 with a stripe of 512K on a Dell Perc 6i RAID controller that I'll post shortly (I won't be home tonight and they're stored on my machine). Did you want to see individual drive benchmarks for the 1TB and 1.5TB drives?

How many drives are you planning to use in your RAID array? That unit looks to hold 5, so I think you'd be fine with RAID 5 even with 2TB drives. The case where I would recommend RAID 6 over RAID 5 is when you can tolerate the extra loss in storage and plan to have 6-8 drives in your array.

There is a key factor when considering each drive: they come with a rated unrecoverable read error rate, normally 1 in 10^14 or 10^15 bits, which you can find on each manufacturer's website. There is a mathematical way to estimate whether your array (depending on how large it is) is likely to encounter an error. If you have a failed drive in a RAID 5 and you hit such an error during the parity rebuild, you'd have data loss or worse. With RAID 6, you get a second parity drive to cover your butt. The Samsung 1TB and 1.5TB drives both have a 10^15 error rate, so that's actually a nice bonus. There is an interesting article over on ZD, Why RAID 6 stops working in 2019, that I found informative, and RAID's Days may be numbered is also worth a read.
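For anyone who wants to run that rebuild-error math themselves, here's a rough Python sketch. It assumes read errors are independent and uniformly distributed, which real drives only approximate, so treat the numbers as ballpark:

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while rebuilding a degraded RAID 5 array, i.e. while reading every
# bit on all surviving drives.

def rebuild_failure_probability(drive_tb, surviving_drives, ure_rate):
    """Chance of >= 1 URE during a full read of the surviving drives."""
    bits_read = drive_tb * 1e12 * 8 * surviving_drives
    return 1 - (1 - 1 / ure_rate) ** bits_read

# 5-drive RAID 5 of 1.5 TB drives: a rebuild reads the 4 survivors.
p_consumer = rebuild_failure_probability(1.5, 4, 1e14)  # 10^14-rated drive
p_samsung  = rebuild_failure_probability(1.5, 4, 1e15)  # 10^15-rated drive

print(f"10^14 URE rate: {p_consumer:.1%} chance of a rebuild error")
print(f"10^15 URE rate: {p_samsung:.1%} chance of a rebuild error")
```

The order-of-magnitude difference in the rated error rate translates into roughly a 38% versus a 5% chance of a botched rebuild in this scenario, which is why the 10^15 rating on those Samsung drives is a genuinely nice bonus.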

From what I've seen with the Dell Perc 6i and RAID 6, the performance is still rather good. I'll dig up some benchmarks that were posted online by other people. I decided to stay with RAID 5 because I only planned on 5 drives in my array. I have two Perc 6i cards for my NAS (8 SATA ports each) and will eventually be spinning 15 drives (likely 3 arrays of RAID 5), but that's for future expansion.
 
Hi, those articles look fascinating, I'll be spending some time studying those!

I'm planning to use either 4 or 5 drives in the NAS; if I go for 5, I have the option of either RAID 5 + hot spare or RAID 6. The NAS is expandable to 10 drives, so based on what you said, RAID 6 is an attractive option. What I may do is start with RAID 5 now and move to RAID 6 as I expand the array.

The particular model is based on an Intel architecture, and I'm fairly sure it implements software RAID, though the manufacturer does not clearly state it.

The software RAID will of course not be on par with your controller, but this particular NAS is said to be the highest-performing one there is at the moment, so I think your benchmarks will still be useful.

As for the benchmarks, I would love to see both though I'm mainly interested in the F2 1.5TB - the price is at the sweet spot for those!

Thank you very much again!
 
I spent some time tonight running some benchmarks on the drives. I had planned to post the data and images in the forum, but including image links isn't allowed (or maybe just not allowed for me because I'm too new here). I tried doing it with the manage attachments feature, but that didn't flow very well. I hope it isn't too taboo that I'm going to link to my cheesy blog for this, but I put all the benchmarks for both the Samsung F3 1TB and the Samsung F2 1.5TB drives into a post.

If you want to read about it, the data is all here in my blog post for both drives:


The one set I left out was the benchmarks for my RAID 5 on the Perc 6i. I did those in a second blog post here if you want to check them out.
 
Thanks for posting your benchmarks handruin! :)

These are very good results for 5400rpm drives - of course the 7200rpm drive is even better (in fact it looks to be the top performer in its class). If only they made it in 1.5TB...

Your RAID results confuse the heck out of me :eek: I assume these numbers are the result of caching in the controller's onboard memory?

Otherwise the Dell 6i would kill the SSD market :D
 
You're right, I was surprised how much quicker the 7200 RPM drive was compared to the 1.5TB once I sat down and ran benchmarks. I really like those 1TB F3 drives. They are cool, quiet, and fast. I have two of them in my workstation and one extra floating around as a spare/test drive which I bought on impulse!

Yes, the Dell Perc 6i comes with 256MB of built-in cache and a battery backup unit in case of power failures. I've enabled the write-back function of the card for my RAID 5 array, so when writes are made to the array, the controller immediately stores them in the cache and tells the OS to continue I/O without having to wait for the physical drive(s) to write the data to the platter.
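A toy illustration of the write-back idea (this is just the concept, not the Perc's actual firmware):

```python
# Write-back caching in miniature: the controller acknowledges a write
# the moment it lands in cache and flushes it to disk later. That is
# why benchmarks against a cached array can far exceed raw platter
# speed - the OS never waits on the platters for the fast path.

class WriteBackCache:
    def __init__(self):
        self.cache = []  # pending writes (battery-backed on a real card)
        self.disk = []   # writes actually committed to the platters

    def write(self, block):
        self.cache.append(block)  # fast path: buffer in cache
        return "ack"              # OS continues immediately

    def flush(self):
        self.disk.extend(self.cache)  # slow path: background destage
        self.cache.clear()

c = WriteBackCache()
assert c.write(b"data") == "ack"  # acknowledged before hitting disk
assert c.disk == []               # data still only in cache
c.flush()
assert c.disk == [b"data"]        # now committed
```

The battery backup matters precisely because of that window where acknowledged data exists only in the cache.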

In addition to the cache, you're also seeing the benefits of the data stripe that is written across the 5 drives. In my case it is a 1MB stripe. The Perc's onboard CPU takes care of the RAID 5 parity calculations as fast as it can, whereas a software RAID 5 would consume normal CPU cycles for this. That's why the CPU overhead will be lower with a Perc compared to software RAID 5. I also noticed a huge difference in the initial background initialization of the array. Using software RAID 5 (before I bought the Perc), it took over 30 hours to initialize on that same AMD system outlined in my benchmarks. With the Perc it takes a little over 5 hours.
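For the curious, the parity the controller is computing is just a running XOR across the data blocks in each stripe. A minimal sketch:

```python
# RAID 5 parity in miniature: the parity block is the byte-wise XOR of
# the data blocks in a stripe. Losing any one block (data or parity),
# the survivors XOR back to the missing one.

def xor_parity(blocks):
    """Byte-wise XOR of equal-length blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Four data blocks in one stripe (5-drive RAID 5: 4 data + 1 parity).
stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_parity(stripe)

# Simulate losing the second drive: XOR the survivors with the parity
# block to reconstruct the missing data - this is what a rebuild does.
recovered = xor_parity([stripe[0], stripe[2], stripe[3], parity])
assert recovered == stripe[1]
```

This is also why rebuilds must read every surviving drive end to end, which is where the unrecoverable-read-error math above comes into play.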

The Perc isn't going to kill SSDs, because SSDs have a killer advantage in seek times. Many of the good SSDs have seek times under 1 ms, whereas this RAID 5 can range from 14-18ms. For my NAS purposes this is perfectly fine. I'll have at most 1-3 clients using it at any time, so random I/O isn't a worry. For highly demanding I/O, an SSD (or an array of them) will likely outperform on random reads and writes in a huge way.

Maybe I should post my NAS project in the DIY section to go over my experiences and share with others if they're interested. My plan was to build a NAS that could expand to 15 drives over time and support iSCSI. I would love to add 2 TB 7200 RPM drives and larger over the next few years as prices become affordable and performance is good. One of my biggest logistical problems was the number of ports I could get at a reasonable price. The Perc cards offer 8 SATA ports each (through the SAS-to-SATA SFF-8484 cable) and I paid $150 each for them. Since they are actual RAID cards and not just ports, it was a great value IMHO. There are a bunch of little hurdles that have to be worked through to get them working, but they were part of the fun in learning.
 
That sounds fascinating! I'd love to read about your NAS project; I think one of these days I'll get around to building one too!

Let me know if you post any more info.


A friend also has a Dell Perc on his home server and it is performing amazingly well from what he tells me. At the money it can be found for (between $150-300) it's an excellent deal, as long as you're willing to build your own server.
 
