QNAP TS-451 - RAID 10?

Jaybo

New Around Here
OK, I understand very little about RAID, and the more I read the less I seem to know! Now it sounds like RAID 5 is a no-go since it is more vulnerable. RAID 6 sounds like the safest, but it takes a huge hit on both write and rebuild speed compared to RAID 10. Since I would also back up the QNAP to a USB drive, would that give me the most bang for my buck and the best security for my data?

I would be running four 3TB NAS drives (HGST or possibly WD Red) as well as an external USB backup drive. Does the USB drive also have to be 3TB, or can I size it to my storage load? My current storage load is about 1.5TB, and I will likely grab more video as space allows.

Thoughts?

..Jay
 
RAID 5 for sure. Less waste, and one drive of redundancy is enough for safety.

 
My preference is RAID 10 - much better write performance, and there's no difference in read performance.

Recovery is easier with RAID 10 than with RAID 5; the risk is the same, but with RAID 10 one is recovering half the array rather than the entire array.

The backup drive should be sized as a multiple of what is actually stored on the RAID set - so if you have 4x 4TB drives with 2TB of data on them, a 4 to 5TB backup disk should be sufficient (and perhaps more than one).
 
My preference is RAID 10 - much better write performance, and there's no difference in read performance.

Recovery is easier with RAID 10 than with RAID 5; the risk is the same, but with RAID 10 one is recovering half the array rather than the entire array.

The backup drive should be sized as a multiple of what is actually stored on the RAID set - so if you have 4x 4TB drives with 2TB of data on them, a 4 to 5TB backup disk should be sufficient (and perhaps more than one).

Interesting. So for my 6-bay, go with RAID 10 over 5? I've always understood that with 3 to 4 drives you go with RAID 5, whereas with anything more than 4 drives you go with RAID 10. True?

To sum it up (6x 4TB WD Reds = 24TB raw in my case; see the quick sketch after these lists for where the numbers come from):

RAID 5:
  • Would give me about 18.6TB of usable space.
  • More usable space, but at the cost of being doomed if there's a 2nd disk failure during a rebuild.

RAID 10:
  • Would give me about 11.7TB of usable space.
  • Less usable space, but with more redundancy and a chance to be safe even with 2 drive failures.
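For reference, here is a quick sketch of where those usable-space figures roughly come from: the raw RAID capacity converted from decimal TB to binary TiB, which is what most NAS UIs report. Filesystem and system-partition overhead is ignored, so the results land slightly off the figures quoted above.

```python
# Rough usable-capacity estimate for RAID 5 vs RAID 10, ignoring filesystem
# and system-partition overhead. Drive sizes are decimal TB; output is TiB,
# which is what most NAS UIs report.
def usable_tib(drives: int, size_tb: float, level: str) -> float:
    raw_bytes = size_tb * 1e12
    if level == "raid5":
        data_drives = drives - 1        # one drive's worth of capacity goes to parity
    elif level == "raid10":
        data_drives = drives // 2       # half the drives hold mirror copies
    else:
        raise ValueError(level)
    return data_drives * raw_bytes / 2**40

print(f"6x 4TB RAID 5 : {usable_tib(6, 4, 'raid5'):.1f} TiB")    # ~18.2 TiB
print(f"6x 4TB RAID 10: {usable_tib(6, 4, 'raid10'):.1f} TiB")   # ~10.9 TiB
print(f"6x 6TB RAID 10: {usable_tib(6, 6, 'raid10'):.1f} TiB")   # ~16.4 TiB
```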

Even with the i3-, i5- and even i7-powered units today, since they can rebuild faster than units with the older CPUs, wouldn't that reduce the failure rate of a RAID 5 rebuild to some degree?

It's interesting to see that the older articles I've read are all for RAID 10, while newer articles now suggest RAID 5. Could this be due to the faster CPUs being able to complete the rebuild much faster?
 
Interesting. So for my 6-bay, go with RAID 10 over 5? I've always understood that with 3 to 4 drives you go with RAID 5, whereas with anything more than 4 drives you go with RAID 10. True?

RAID 5 with modern NAS devices is plenty fast; most people won't notice the difference from RAID 10. The recommendation against RAID 5 is fairly new, and the main reason for it is the increasing size of drives, which leads to longer rebuild times and increases the risk that a second disk could fail before the rebuild is done, or that a read error could corrupt the array.
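To get a feel for those rebuild times, a naive estimate is simply drive capacity divided by sustained rebuild rate. The 50-150 MB/s range below is an assumption; real-world rates vary a lot with the NAS hardware, the RAID implementation, and how busy the box is during the rebuild.

```python
# Naive rebuild-time estimate: the whole replacement drive has to be
# rewritten, so time ≈ drive capacity / sustained rebuild rate.
def rebuild_hours(size_tb: float, rate_mb_s: float) -> float:
    return size_tb * 1e12 / (rate_mb_s * 1e6) / 3600

for size_tb in (3, 4, 6):
    fast, slow = rebuild_hours(size_tb, 150), rebuild_hours(size_tb, 50)
    print(f"{size_tb} TB drive: roughly {fast:.0f} to {slow:.0f} hours")
```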

An important thing to note is that RAID is for continuity or performance rather than a form of backup. With RAID you can continue using your NAS device while you are waiting for a replacement drive when one fails; without it you would have to restore from backups. But in both cases you should have the backup (more than one drive can fail, RAID arrays can corrupt, the NAS could die completely). If you do have a backup, then it's a question of how much some downtime would cost you and how likely it is, or how much the extra 5% (or 15% in certain cases) of performance from RAID 10 is worth to you vs. the fact that it uses up quite a bit more raw disk.

So basic risk management...
 
With RAID you can continue using your NAS device while you are waiting for a replacement drive when one fails

Just keep in mind - with RAID 5, if all the disks are around the same age, the MTBF is against you... most RAID 5 arrays totally die on rebuild...

Been there - and you don't want the t-shirts...
 
An important thing to note is that RAID is for continuity or performance rather than a form of backup. With RAID you can continue using your NAS device while you are waiting for a replacement drive when one fails; without it you would have to restore from backups. But in both cases you should have the backup (more than one drive can fail, RAID arrays can corrupt, the NAS could die completely). If you do have a backup, then it's a question of how much some downtime would cost you and how likely it is, or how much the extra 5% (or 15% in certain cases) of performance from RAID 10 is worth to you vs. the fact that it uses up quite a bit more raw disk.

Basically that's the risk of RAID 5 - seriously... Among friends and friends-of-friends who run servers/NAS boxen, RAID 5/6 is pretty much in the "not doing" pile...
 
Just keep in mind - with RAID 5, if all the disks are around the same age, the MTBF is against you... most RAID 5 arrays totally die on rebuild...

Been there - and you don't want the t-shirts...

In that case, with a 6-bay TVS-671, go RAID 10, yes? (At the cost of disk space, of course.) I didn't know about the failure rate with RAID 5 during rebuild. I'm not willing to risk it given the odds.

There are a lot of mixed opinions in the whole RAID debate, but one thing's for sure... I do see more "you're better off with RAID 10" than "RAID 5," especially for 6-bay NASes...

Thanks sfx2000.
 
Even with the i3-, i5- and even i7-powered units today, since they can rebuild faster than units with the older CPUs, wouldn't that reduce the failure rate of a RAID 5 rebuild to some degree?

Bottleneck there is the disk I/O, not the CPU. Calculating parity data is very easy with a modern processor.
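To illustrate why the parity math itself is cheap: RAID 5 parity is essentially a running XOR across the data blocks of each stripe, and any single lost block can be rebuilt by XORing the survivors with the parity. A toy byte-wise sketch (real implementations use tuned SIMD code, but the principle is the same):

```python
# Toy RAID 5 parity: the parity block is the XOR of the data blocks in a
# stripe, and any single lost block can be recovered by XORing the
# surviving blocks with the parity.
import os

blocks = [os.urandom(4096) for _ in range(3)]           # three data blocks in one stripe
parity = bytes(a ^ b ^ c for a, b, c in zip(*blocks))   # parity block

# "Lose" block 1 and rebuild it from the two survivors plus the parity.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(blocks[0], blocks[2], parity))
assert rebuilt == blocks[1]
print("lost block recovered from parity")
```

The expensive part of a rebuild is streaming every block of every surviving disk through that calculation, which is why the disks, not the CPU, set the pace.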
 
Just keep in mind - with RAID 5, if all the disks are around the same age, the MTBF is against you... most RAID 5 arrays totally die on rebuild...

I think claiming that "most" die is exaggerated. Over the years I've had to rebuild a RAID 5 array maybe 3 or 4 times, with zero additional failure so far. So while it might possibly be more likely, it's not THAT likely.
 
I think claiming that "most" die is exaggerated. Over the years I've had to rebuild a RAID 5 array maybe 3 or 4 times, with zero additional failure so far. So while it might possibly be more likely, it's not THAT likely.

Good to know. Out of curiosity, what unit did you rebuild the RAID 5 array in? What type of CPU? How big were the drives? How long did it take?

Thanks!
 
Good to know. Out of curiosity, what unit did you rebuild the RAID 5 array in? What type of CPU? How big were the drives? How long did it take?

Thanks!

Those were HP rackmount servers, with two Xeons and a Compaq Smart Array 400 or 410p, and various numbers of 2.5" 7.2K RPM SAS drives (depending on the server). Not the kind of hardware you have in a home NAS :)

I never kept track of the rebuild time, since those servers were in a datacenter. I'd only check the log a day or two later to ensure that everything went fine and re-enable my monitoring script. The RAID volume wasn't that large however, maybe 2-4 TB at most, so it probably rebuilt much faster than an entry-level NAS with 6-12 TB of SATA disks would.
 
Those were HP rackmount servers, with two Xeons and a Compaq Smart Array 400 or 410p, and various numbers of 2.5" 7.2K RPM SAS drives (depending on the server). Not the kind of hardware you have in a home NAS :)

I never kept track of the rebuild time, since those servers were in a datacenter. I'd only check the log a day or two later to ensure that everything went fine and re-enable my monitoring script. The RAID volume wasn't that large however, maybe 2-4 TB at most, so it probably rebuilt much faster than an entry-level NAS with 6-12 TB of SATA disks would.

Gotcha. Thanks for sharing.

Hats off to having the two Xeon :D

I think I'll stick with RAID 5 myself. 6x 4TB WD Reds alone are $1,000+, which I have now.

If I wanted to go the RAID 10 route, I wouldn't personally do it with my setup of just (6x) 4TB drives unless I had (6x) 6TB WD Reds. Then at least I'd have ~16.7TB of usable space out of 36TB with RAID 10.

Whereas with RAID 5 on the 6x 4TB drives I'm able to have ~18.6TB of available space while still having a little redundancy against a hard drive failure. Is a second disk failure possible? Sure... but I'll take my chances and rely on the insurance of my manual external HD backups along with my cloud backup.
 
Just keep in mind - with RAID 5, if all the disks are around the same age, the MTBF is against you... most RAID 5 arrays totally die on rebuild...

Been there - and you don't want the t-shirts...

I don't think it's really that simple. Having worked with thousands of different servers and tens of storage setups (though not really RAID 5 in the last 10 years, and mainly enterprise stuff), I have seen very few issues with RAID 5.

Firstly, I've seen a lot of RAID 5 rebuilds on Sun servers, and not one case where it failed... Sure, the likelihood is higher now with large consumer drives, but saying most fail is totally wrong.

Generally the problem is also not really MTBF (at least from a statistical point of view), since even fairly unreliable drives should be at around a 5% per-year failure rate and rebuilding the array will take less than 2 days (the likelihood of a second failure can be calculated from that). It's not quite that simple either, because, as you mention with age (though I would expect it's more about the same manufacturing batch, transport, etc.), drives from the same batch do seem to die more frequently.

However, the biggest risk is actually the rate of non-recoverable read errors per bit read. All drives can have mechanical read errors, and when drives get bigger and bigger while the error rate per bit read does not improve, the chance of encountering one during a RAID rebuild keeps increasing. This is the most likely thing to cause a RAID 5 rebuild to fail: during the rebuild, pretty much all the bits on all the remaining drives have to be read to regenerate the data and parity, and that's a lot of bits on modern 6TB drives...
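To put rough numbers on both points above, here is a back-of-the-envelope sketch, assuming a 5% per-year drive failure rate, a 2-day rebuild window, and the commonly quoted consumer-drive spec of 1 unrecoverable read error per 10^14 bits read. All three figures are assumptions, and the URE spec in particular is a worst-case rating rather than an observed rate.

```python
# Back-of-the-envelope rebuild risk for a 6-bay RAID 5 of 4TB drives
# (one drive failed, so the 5 survivors must be read in full).
afr          = 0.05      # assumed annualized failure rate per drive
rebuild_days = 2         # assumed rebuild window
ure_per_bit  = 1e-14     # assumed spec: 1 unrecoverable read error per 1e14 bits
survivors    = 5
drive_bytes  = 4e12      # 4 TB

# Chance that a second drive dies during the rebuild window.
p_one = afr * rebuild_days / 365
p_second_failure = 1 - (1 - p_one) ** survivors

# Chance of hitting at least one unrecoverable read error while reading
# every bit of every surviving drive (errors treated as independent per bit).
bits_read = survivors * drive_bytes * 8
p_ure = 1 - (1 - ure_per_bit) ** bits_read

print(f"second drive failure during rebuild: {p_second_failure:.2%}")  # ~0.14%
print(f"at least one URE during rebuild:     {p_ure:.0%}")             # ~80%
```

The per-bit independence assumption is what makes the URE figure look so alarming; as the replies in this thread note, real-world rebuild success rates are considerably better than that naive estimate suggests.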
 
I don't think it's really that simple. Having worked with thousands of different servers and tens of storage setups (though not really RAID 5 in the last 10 years, and mainly enterprise stuff), I have seen very few issues with RAID 5.

Firstly, I've seen a lot of RAID 5 rebuilds on Sun servers, and not one case where it failed... Sure, the likelihood is higher now with large consumer drives, but saying most fail is totally wrong.

Generally the problem is also not really MTBF (at least from a statistical point of view), since even fairly unreliable drives should be at around a 5% per-year failure rate and rebuilding the array will take less than 2 days (the likelihood of a second failure can be calculated from that). It's not quite that simple either, because, as you mention with age (though I would expect it's more about the same manufacturing batch, transport, etc.), drives from the same batch do seem to die more frequently.

However, the biggest risk is actually the rate of non-recoverable read errors per bit read. All drives can have mechanical read errors, and when drives get bigger and bigger while the error rate per bit read does not improve, the chance of encountering one during a RAID rebuild keeps increasing. This is the most likely thing to cause a RAID 5 rebuild to fail: during the rebuild, pretty much all the bits on all the remaining drives have to be read to regenerate the data and parity, and that's a lot of bits on modern 6TB drives...

Interesting....and noted. Thanks.

Is it safe to say that a smaller drive (e.g. 4TB vs. 6TB) would be "overall" safer, due to the obvious fact that there are fewer bits to read and therefore fewer chances to hit a read error? This is not to say that a smaller drive is better than a larger drive in general; I'm referring specifically to the RAID rebuild standpoint.

Before ordering my 4TB WD Reds I was back and forth between getting 6x 4TB or 6x 6TB drives. Money wasn't an issue; for some reason I just went with the 4TB drives over the 6TB. Well okay, from a practical point of view as a consumer and given the space that I really needed, I chose the 4TB because...
  • 4TB was cheaper at ~$150 vs 6TB for $250.
  • A $100 difference x6 drives, which saved me $600 by going with the 4TB.
  • I used that $600 toward the i7 and 16GB RAM upgrade and still had more than half left over.
  • The WD Reds are under warranty anyway, so that's pretty reassuring. Call them up, they send you a drive, and you send yours back in the same box. It's easy. I've done it before, unfortunately, but it's inevitable.
  • In the event of a RAID 5 rebuild, the 4TB would finish sooner than the 6TB. Obvious, but worth noting.
  • This is what swayed me toward the 4TBs, for that particular reason: less can be more. During a rebuild you'll need to hold your breath a lot longer with a 6TB than with a 4TB. Again, obvious, I know.
So if any of the RAID-rebuild-related points above hold true, then I guess it's something worth noting, yes? At any rate, with the $300 I had left over (there really wasn't a budget, but you get the drift) I went ahead and ordered two more 4TB drives just to have on hand for when I hit the iceberg that's ahead of me in the unforeseen future. That way I don't have to wait days for new drives; I can start rebuilding right away. Sure, the failed drive goes back to WD, but I'll always have two fresh ones on tap. Leapfrog.
 
I don't think it's really that simple. Having worked with thousands of different servers and tens of storage setups (though not really RAID 5 in the last 10 years, and mainly enterprise stuff), I have seen very few issues with RAID 5.

Firstly, I've seen a lot of RAID 5 rebuilds on Sun servers, and not one case where it failed... Sure, the likelihood is higher now with large consumer drives, but saying most fail is totally wrong.

Well, we don't use RAID 5/6 in our production environment - I do have RAID 1 and RAID 0 systems, along with RAID 10s in some boxes.

Something to consider... in a four-disk RAID 10, I can potentially lose two drives and still recover, with no performance impact - with RAID 5 in that same environment, lose two drives and you're toast...
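A tiny enumeration of that four-disk case, just to make the "potentially" explicit - a sketch assuming the usual layout of two mirrored pairs striped together:

```python
# In a 4-disk RAID 10 (two mirrored pairs, striped together), a two-drive
# failure is fatal only when both halves of the same mirror pair are lost.
from itertools import combinations

pairs = [("A1", "A2"), ("B1", "B2")]              # the two mirror pairs
disks = [d for pair in pairs for d in pair]

survivable = sum(
    1
    for failed in combinations(disks, 2)
    if not any(set(pair) <= set(failed) for pair in pairs)
)
total = len(list(combinations(disks, 2)))
print(f"{survivable} of {total} two-drive failures are survivable")   # 4 of 6 (~67%)
```

With RAID 5 on the same four disks, all six of those two-drive combinations would be fatal.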

And on recovery - with RAID 10 there is no real impact when rebuilding the array after a single drive loss - with RAID 5 there is a significant penalty to recover/rebuild the disk that was replaced... whether controller-based or CPU-based (e.g. mdadm, which most consumer Linux-based NAS boxes run).

So this is why I cannot recommend RAID 5 at any cost - JBOD is safer than RAID 5, and I would posit that even RAID 0 is safer, because there one absolutely knows the risk and acts accordingly - with RAID 5 folks get complacent...

I suppose it's like the old adage - there are two kinds of people in the world: those who back up, and those who haven't lost a hard disk yet... the same applies here - folks who run RAID 5, and those who haven't seen what happens when things go wrong...
 
And beware of the file system sitting on top of the RAID: a corrupted file system, malware in it, a NAS mainboard fault, or a failing power supply can all corrupt your data.

RAID is no help against any of these.

RAID is not a backup.

IMO, drive failure is less likely than the above in a NAS with just a few non-RAID drives - this in the forum context of NASes for SOHO/home use.
 
It would be silly for anyone to use even RAID 1 as a primary backup. That's why one should always have a backup of their content that's easily accessible (local USB drives etc.) and an off-site backup, such as a cloud service or another NAS located elsewhere.

Even with a RAID 10 or any type of RAID configuration, organization of the content is key for it to be easily managed.

Speaking of which, what do you think of this method:


Computer ->
NAS (with volumes in 4TB to 5TB increments, so they're easy to handle) ->
Back up to a local USB external HD [Backup #1] ->
Back up the USB external HD and/or the NAS volumes themselves to the cloud [Backup #2]

I use this method, except that I go a step further and also back up "Backup #2" to my NAS located at my friend's place. In return, his NAS sits here and he backs up to it. That is Backup #3, my second off-site backup.
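For what it's worth, here is a minimal sketch of what the "Backup #1" step could look like as a scheduled script. The share and mount paths are made-up examples, and on a QNAP you would more likely just use the built-in backup tools, so this is only to illustrate the one-way-mirror idea:

```python
# One-way mirror of a NAS share to a locally attached USB drive, driven by
# rsync (available on most Linux-based NAS boxes). The paths are placeholder
# examples; adjust them to your own share and mount points.
import subprocess
import sys

SRC = "/share/Multimedia/"           # NAS volume to protect (trailing slash matters to rsync)
DST = "/share/external/USB_Backup/"  # mount point of the USB backup drive

result = subprocess.run(
    ["rsync", "-a", "--delete", SRC, DST],   # -a keeps attributes, --delete mirrors removals
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    sys.exit(f"backup failed:\n{result.stderr}")
print("backup completed")
```

The cloud and off-site steps are the same idea pointed at a remote destination instead of a local mount.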

Just out of curiosity, what type of volume configuration do you all use on your NAS, and for what application? E.g. static single volume, thick multiple volumes, or thin multiple volumes?
 