RAID 5 vs RAID 6 ??


BTW - that's a pretty nice NAS. Keep in mind that it's not a backup but a working store - I'd suggest keeping content and archival footage on other media for backup purposes.

Did you just hit reply without bothering to read the post? "All of the footage is and will be backed up multiple times to other archival hard drives."
 
Nobody ever accused RAID10 of being cheap - that's not the point of it.

The different RAID options give you a choice between speed, size, cost and availability. The question you need to answer is how long your array can be down before it's a problem - and, given the backups are spread across lots of different disks, how much of your own time you'd have to dedicate to the recovery (rather than just leaving it to sync for a week or more before you can use it, which is what it will probably be).

I'm not asking about doing a completely different setup than the 8-drive ~100TB I have. I'm asking about RAID 5 vs 6 considerations for video editing, and rebuild times. There's no "mission critical" reason why the RAID can't be down a week if something happens and it needs to rebuild.

The risk of RAID5 and RAID6 during recovery comes from the fact that a rebuild does a full read of every single disk. The first couple of recoveries will probably be OK, but after that it gets dicey, as the remaining disks all have significantly more wear than before - and there's no going back or undoing that damage. If your workload is read-intensive in the first place, that makes this worse.

The first couple recoveries!?? How many times do you expect a drive to fail? Good grief... odds are with regular checking and scanning of the disks and replacing any that have an issue, it would be years before any surprises happened. Multiple drive failures would presumably take so many years to happen I'd be onto a completely different system by that point in the future. This project isn't going to last forever, and I do not expect to be using the same NAS hardware a decade from now.

Where RAID10 really comes into its own is a large array with more disks - RAID5/6 can be lethal in that environment, but RAID10 limits the risk and doesn't put excessive, unnecessary wear on the whole array.

Personally I've gone back and forth in my mind on the benefits of RAID5/6 vs RAID10 on a home system, but it all comes down to an individual's requirements - is it an annoyance if you lose it all, but you shrug your shoulders and carry on, or is it project over if the array is down for a week or two to recover, etc.?

Why would anyone "lose it all" when multiple archival backups are in place? The system can always be rebuilt from scratch, it would just take time... the question I keep asking here is HOW MUCH TIME? If it takes a week to recover RAID5 or RAID6 with (8) 14TB drives, that's fine. I'm not running NASA or an emergency room.
 
With such large capacity drives, I would avoid RAID 5. And due to the large number of drives (meaning higher risk of failure on a rebuild), I would consider RAID 6 as a minimum, provided you do have a reliable backup in case of additional failures during a rebuild.
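To put a rough number on that rebuild risk, here's a back-of-the-envelope sketch in Python. It assumes the commonly quoted 1-in-10^14-bits unrecoverable read error (URE) spec for consumer drives - enterprise drives are usually rated 1 in 10^15, and real-world drives often do much better than spec, so treat this as a worst-case illustration rather than a prediction:

```python
import math

# Assumptions: 8-bay array of 14 TB consumer drives, URE spec of 1e-14 per bit.
ure_per_bit = 1e-14          # unrecoverable read error rate (spec-sheet figure)
drive_bytes = 14e12          # one 14 TB drive
surviving_drives = 7         # a RAID 5 rebuild must read all 7 remaining drives

bits_read = surviving_drives * drive_bytes * 8
p_hit = 1 - math.exp(-ure_per_bit * bits_read)   # Poisson approximation

print(f"Bits read during rebuild: {bits_read:.2e}")
print(f"P(at least one URE, naive model): {p_hit:.1%}")
```

Under that naive model the rebuild is almost guaranteed to hit at least one URE, which is the usual argument for RAID 6: the second parity stripe lets the rebuild recover from a read error on a surviving disk.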

Ideally, I would have considered getting higher capacity drives and going RAID 10 (to improve performance and also better protection in case of failures).

If that NAS supports it, consider adding an SSD cache. If an editor is doing a lot of scrubbing in a project, having that data in an SSD cache can greatly help performance (I assume you have at least 2.5 Gbps, or even 10 Gbps Ethernet). Go for a quality SSD that has a higher endurance rather than cheaper/higher performance ones.

I've seen tests showing, perhaps counterintuitively, that an SSD cache does not actually help in this use case. I'm not nerdy enough to explain why, but you can find them on YT.

Regarding rebuild times - my own QNAP has three 10 TB drives in RAID 5. I run a RAID scrub once per month; that scrub (with priority set to either low or medium, I forget which) takes about 15-18 hours. So with 14 TB, it would be reasonable to assume somewhere around 24 hours of rebuild/scrub time.
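A straight linear scale-up of those numbers gives roughly the same answer. This is a rough sketch - it assumes scrub time grows proportionally with capacity, which is approximately true for a sequential full-disk read:

```python
# Midpoint of the reported 15-18 h scrub time for 10 TB drives.
scrub_hours_10tb = (15 + 18) / 2

# Scale linearly with capacity: 10 TB -> 14 TB.
est_hours_14tb = scrub_hours_10tb * 14 / 10

print(f"Estimated scrub/rebuild time for 14 TB: ~{est_hours_14tb:.0f} h")
```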

Thank you for actually stating a number!! 24 hours is nothing. People have made it sound like a rebuild would take months or something...

That scrub, in addition to ensuring data integrity, also helps with early detection of disk failures, reducing the chances that, following a failure, a second disk would fail during the rebuild.

If I had to redo it today, I would have spent the extra dollars on a fourth disk and gone RAID 10 instead of RAID 5. I wasn't willing at the time to spend an extra $400 on a fourth disk. Sadly, QNAP's RAID Migration does not support 5 -> 10 scenarios, so I'd have to do a full backup and restore to migrate (and my current backup lacks the space to also include the ~5 TB on the main NAS that are actually backups from my PCs and servers, so these would have to be lost).

As stated in the post, I need ~100 TB and have 8 bays in the NAS. RAID 10 is literally impossible to implement for this, as well as absurdly expensive and space-inefficient. I really wish people would stop suggesting it ad nauseam.
 
You need to build a ZFS zpool: https://wiki.ubuntu.com/ZFS/ZPool

Please read about the differences and similarities:
https://www.klennet.com/notes/2019-07-04-raid5-vs-raidz.aspx

You add HDDs to it for Z1/Z2 (minimum 4/5 HDDs), plus an SSD for logs and cache. I've been using it since 2015 and have not had any problems. I download torrents at 1 Gb/s and share them too. Write speed is not an issue, and neither is backup. I stopped using RAID 5/6/10 nine years ago. Just google it - there are big advantages to using RAIDZ.

I will consider it and read the link you shared... but my question was about rebuild times. How long can I expect it to take to rebuild a RAID 5 vs RAID 6 (in hours, or days, or whatever) with the NAS system I have of 8 bays of 14 TB drives? Are we talking about hours, days or weeks? Is there a simple way to calculate what I could expect (to help decide whether losing a second drive of storage capacity would be worth it for RAID 6)?
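For a rough answer: rebuild time is mostly bounded by how fast the replacement drive can be rewritten, and that bound is about the same for RAID 5 and RAID 6 - what differs is usable capacity. A sketch, where the 50-150 MB/s sustained rebuild rates are assumptions (the low end reflects a rebuild competing with normal use, the high end a mostly idle array):

```python
drive_tb = 14
bays = 8

# Parity overhead: RAID 5 loses one drive of capacity, RAID 6 loses two.
usable_raid5 = (bays - 1) * drive_tb    # -> 98 TB usable
usable_raid6 = (bays - 2) * drive_tb    # -> 84 TB usable

# Rebuild must write one full replacement drive end to end.
drive_bytes = drive_tb * 1e12
for rate_mb_s in (50, 150):             # assumed sustained rebuild throughput
    hours = drive_bytes / (rate_mb_s * 1e6) / 3600
    print(f"Rebuild at {rate_mb_s} MB/s: ~{hours:.0f} h")

print(f"Usable: RAID 5 = {usable_raid5} TB, RAID 6 = {usable_raid6} TB")
```

So under these assumptions a rebuild lands in the range of roughly one to three days, not weeks - and RAID 6 costs one extra drive of capacity (84 vs 98 TB usable here) in exchange for surviving a second failure mid-rebuild.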
 
I've seen tests showing, perhaps counterintuitively, that an SSD cache does not actually help in this use case. I'm not nerdy enough to explain why, but you can find them on YT.
Read caching is a given. My guess is you are mixing up write-through and write-back caching. A write-through cache requires the system to write everything directly to the hard drive; a write-back cache should help here, but do not use one without battery backup, like an APC UPS. This is general advice, as I have not used your hardware, but I know RAID very well.
And the really serious server people use a battery-backed controller card when using a write cache - the controller card has a small battery on it to protect the cache contents.
 
I'm not asking about doing a completely different setup than the 8-drive ~100TB I have. I'm asking about RAID 5 vs 6 considerations for video editing, and rebuild times. There's no "mission critical" reason why the RAID can't be down a week if something happens and it needs to rebuild.



The first couple recoveries!?? How many times do you expect a drive to fail? Good grief... odds are with regular checking and scanning of the disks and replacing any that have an issue, it would be years before any surprises happened. Multiple drive failures would presumably take so many years to happen I'd be onto a completely different system by that point in the future. This project isn't going to last forever, and I do not expect to be using the same NAS hardware a decade from now.



Why would anyone "lose it all" when multiple archival backups are in place? The system can always be rebuilt from scratch, it would just take time... the question I keep asking here is HOW MUCH TIME? If it takes a week to recover RAID5 or RAID6 with (8) 14TB drives, that's fine. I'm not running NASA or an emergency room.

Any disk replacement has the same impact - whether because the previous one failed, is aging, or you just decided that shiny spare disk couldn't wait to be part of the array. This is why the parity setups aren't recommended for large arrays. Yes, you can pre-emptively replace a disk that fails scrubs, SMART checks, etc., but that has exactly the same hit on the other disks during recovery as if you'd waited for it to fail.

I wasn't trying to convince you to swap to RAID10 specifically - just asking questions anyone building an array should think about. In your case you've made the decision that the cost of RAID10 isn't worth it, and you've also answered the question of "what if it's out for a week" - both perfectly valid and good decision points when choosing the array type. Similarly, if you have backups then you won't (shouldn't) lose it all, but the point of my post was things for anyone building an array to think about - you would be surprised how many people think RAID is a backup and gives them bulletproof data protection.

Anyway, given your preference for a parity array for cost reasons, I'd take the RAID6 approach simply because it gives slightly more protection. I'm not familiar with QNAP arrays and how they do the scrub, but assuming it's not much more than triggering a disk's long SMART test, it will be much quicker than the actual rebuild, as each disk runs it independently, unhindered by the other disks, motherboard I/O, CPU, etc. For planning and decision-making purposes I would still use a week as the ballpark recovery figure - and that's the worst case.
 
Read caching is a given. My guess is you are mixing up write-through and write-back caching. A write-through cache requires the system to write everything directly to the hard drive; a write-back cache should help here, but do not use one without battery backup, like an APC UPS. This is general advice, as I have not used your hardware, but I know RAID very well.
And the really serious server people use a battery-backed controller card when using a write cache - the controller card has a small battery on it to protect the cache contents.

Video editing is 99.99% a read process on the drives, scrubbing through and playing large files - not a writing process. Either way APCs and controller cards have nothing to do with my question, which was about rebuild times and RAID 5 vs 6.
 
my post was things for anyone building an array to think about - you would be surprised how many people think RAID is a backup and gives them bulletproof data protection.

But this isn't a blog article for a general audience - it's an answer to a specific question from one person on a tech forum. And I made it very clear in the OP that I already had more than one backup in place for my data.

So it's annoying to have multiple guys who apparently didn't bother to read what I wrote tell me that "backups are important" while ignoring the actual question in the post. I mean... no kidding backups are important.... maybe next dudes will be explaining that I shouldn't push spinning hard drives off my desk for fun! 😏

For planning and decision-making purposes I would still use a week as the ballpark recovery figure - and that's the worst case.

Great - that's reasonable and what I wanted to know.
 
If you're actually editing a video, it isn't a 99.99% read process. Your sources don't sound reliable.
 
So it's annoying to have multiple guys who apparently didn't bother to read what I wrote tell me that "backups are important" while ignoring the actual question in the post. I mean... no kidding backups are important.... maybe next dudes will be explaining that I shouldn't push spinning hard drives off my desk for fun! 😏

Relax man - seriously...

Consider the scope of the forum - once things are here, they're here forever...

We've had way too many stories about NAS and data loss - so it's kind of a default that folks mention backups for the NAS box.

It's not just for your benefit, but for those that stumble across this thread later on - SEO and Google search will likely land others in a similar situation to yours - different workflows, perhaps...
 