
RAID 5 vs RAID 6 ??


BTW - that's a pretty nice NAS - keep in mind that it's not a backup, but a working store - I do suggest keeping content and archival footage on other media for backup purposes.

Did you just hit reply without bothering to read the post? "All of the footage is and will be backed up multiple times to other archival hard drives."
 
Nobody ever accused RAID10 of being cheap - that's not the point of it.

The different RAID options give you a choice between speed, size, cost and availability. The question you need to answer is how long your array can be down before it's a problem - and, given the backups are spread across lots of different disks, how much of your own time you'd have to dedicate to the recovery (rather than just leaving it to sync for over a week before you can use it, which is probably what it will be).

I'm not asking about doing a completely different setup than the 8-drive ~100TB I have. I'm asking about RAID 5 vs 6 considerations for video editing, and rebuild times. There's no "mission critical" reason why the RAID can't be down a week if something happens and it needs to rebuild.

The risk with RAID5 and RAID6 during recovery comes from the fact that a rebuild does a full read of every single disk. The first couple of recoveries will probably be OK, but after that it gets dicey, as the remaining disks all carry significantly more wear than the replacements - and there's no going back or undoing that damage. If you're doing something that's read-intensive in the first place, that makes this worse.

The first couple recoveries!?? How many times do you expect a drive to fail? Good grief... odds are with regular checking and scanning of the disks and replacing any that have an issue, it would be years before any surprises happened. Multiple drive failures would presumably take so many years to happen I'd be onto a completely different system by that point in the future. This project isn't going to last forever, and I do not expect to be using the same NAS hardware a decade from now.

Where RAID10 really comes into its own is a large array with more disks - RAID5/6 can be lethal in that environment, but RAID10 limits the risk and doesn't put excessive, unnecessary wear on the whole array.

Personally I've gone back and forth in my mind on the benefits of RAID5/6 vs RAID10 on a home system, but it all comes down to an individual's requirements - is it an annoyance if you lose it all, where you shrug your shoulders and carry on, or is it project over if the array is down for a week or two to recover, etc.?

Why would anyone "lose it all" when multiple archival backups are in place? The system can always be rebuilt from scratch, it would just take time... the question I keep asking here is HOW MUCH TIME? If it takes a week to recover RAID5 or RAID6 with (8) 14TB drives, that's fine. I'm not running NASA or an emergency room.
 
With such large capacity drives, I would avoid RAID 5. And due to the large number of drives (meaning higher risk of failure during a rebuild), I would consider RAID 6 as a minimum, provided you do have a reliable backup in case of additional failures during a rebuild.
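
To put a rough number on that rebuild risk, here's a back-of-envelope sketch. It assumes the oft-quoted datasheet spec of one unrecoverable read error (URE) per 1e14 bits read for consumer drives (enterprise models are often rated 1e15) - those are assumed spec-sheet figures, not measurements of any particular drive:

```python
import math

# Odds of finishing a RAID5 rebuild without hitting an unrecoverable
# read error (URE). ASSUMPTION: the common datasheet spec of 1 URE per
# 1e14 bits read; enterprise drives are often rated 1e15.

DRIVE_TB = 14        # capacity per drive
SURVIVORS = 7        # disks fully read during an 8-drive RAID5 rebuild

bits_read = SURVIVORS * DRIVE_TB * 1e12 * 8       # ~7.8e14 bits

for ure_rate in (1e-14, 1e-15):
    p_clean = math.exp(-ure_rate * bits_read)     # Poisson approximation
    print(f"URE spec {ure_rate:.0e}: P(clean rebuild) = {p_clean:.1%}")

# URE spec 1e-14: P(clean rebuild) = 0.0%   (a hit is near-certain)
# URE spec 1e-15: P(clean rebuild) = 45.7%
```

Real-world URE rates are heavily debated, so take this as an illustration of why the second parity drive is reassuring on big arrays, not as a precise prediction.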

Ideally, I would have considered getting higher capacity drives and going RAID 10 (to improve performance and also better protection in case of failures).

If that NAS supports it, consider adding an SSD cache. If an editor is doing a lot of scrubbing in a project, having that data in an SSD cache can greatly help performance (I assume you have at least 2.5 Gbps, or even 10 Gbps, Ethernet). Go for a quality SSD with higher endurance rather than cheaper / higher-performance ones.

I've seen tests that show, perhaps counterintuitively, that an SSD cache actually does not help in this use case. I'm not nerdy enough to explain why, but you can find them on YT.

Regarding rebuild times - my own QNAP has three 10 TB drives in RAID 5. I run a RAID scrub once per month; that scrub (with priority set to either low or medium, I forget which) takes about 15-18 hours. So with 14 TB, it would be reasonable to assume somewhere around 24 hours of rebuild/scrub time.
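
If anyone wants the arithmetic behind that extrapolation, here's a minimal sketch, assuming the rebuild is essentially one full sequential pass over the replacement drive (the throughput figures are my assumptions for large 7200 rpm drives, not measurements; parity math, rebuild priority and concurrent use of the array will stretch them):

```python
# Minimal rebuild-time arithmetic: time ~= capacity / sustained throughput.

CAPACITY_TB = 14

for mb_per_s in (100, 150, 200):
    hours = CAPACITY_TB * 1e12 / (mb_per_s * 1e6) / 3600
    print(f"at {mb_per_s} MB/s sustained: ~{hours:.0f} hours")

# at 100 MB/s sustained: ~39 hours
# at 150 MB/s sustained: ~26 hours
# at 200 MB/s sustained: ~19 hours
```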

Thank you for actually stating a number!! 24 hours is nothing. People have made it sound like a rebuild would take months or something...

That scrub, in addition to ensuring data integrity, also helps with early detection of disk failures, reducing the chance that, following a failure, a second disk fails during the rebuild.

If I had to redo it today, I would have spent the extra dollars on a fourth disk and gone RAID 10 instead of RAID 5. I wasn't willing at the time to spend an extra $400 on a fourth disk. Sadly, QNAP's RAID Migration does not support 5 -> 10 scenarios, so I'd have to do a full backup and restore to migrate (and my current backup lacks the space to also include the ~5 TB on the main NAS that are actually backups from my PCs and servers, so those would have to be lost).

As stated in the post, I need ~100 TB and have 8 bays in the NAS. RAID 10 is literally impossible to implement for this, as well as absurdly expensive and space-inefficient. I really wish people would stop suggesting it ad nauseam.
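
For reference, the raw capacity arithmetic with 8 bays of 14 TB (before filesystem and formatting overhead, so real usable space is a bit lower still):

```python
# Usable capacity for 8 x 14 TB drives under the RAID levels discussed.

N, SIZE_TB = 8, 14

layouts = {
    "RAID 5  (1 drive of parity)": (N - 1) * SIZE_TB,
    "RAID 6  (2 drives of parity)": (N - 2) * SIZE_TB,
    "RAID 10 (mirrored pairs)":     (N // 2) * SIZE_TB,
}
for name, tb in layouts.items():
    print(f"{name}: {tb} TB usable")

# RAID 5:  98 TB -- meets the ~100 TB requirement
# RAID 6:  84 TB -- the cost of the second parity drive
# RAID 10: 56 TB -- nowhere near it
```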
 
You need to build a ZFS zpool: https://wiki.ubuntu.com/ZFS/ZPool

Please read about the differences and similarities:
https://www.klennet.com/notes/2019-07-04-raid5-vs-raidz.aspx

You add HDDs to it for Z1/Z2 (minimum 4/5 HDDs) plus an SSD for logs and cache. I have been using it since 2015 and have not had any problems. I download torrents at 1 Gb/s and share them too; write speed is not an issue, and neither is backup. I stopped using RAID 5/6/10 nine years ago. Just google it - there are big advantages to using RAIDZ.

I will consider it and read the link you shared... but my question was about rebuild times. How long can I expect it to take to rebuild a RAID 5 vs RAID 6 (in hours, or days, or whatever) with the NAS system I have of 8 bays of 14 TB drives? Are we talking about hours, days or weeks? Is there a simple way to calculate what I could expect (to help decide whether losing a second drive of storage capacity would be worth it for RAID 6)?
 
I've seen tests that show, perhaps counterintuitively, that an SSD cache actually does not help in this use case. I'm not nerdy enough to explain why, but you can find them on YT.
Read caching is a given. My guess is you are mixing up write-through and write-back caching. A write-through cache requires the system to write all writes directly to the hard drive. Do not use a write cache without battery backup like an APC UPS. With a write cache it should help. This is in general, as I have not used your hardware, but I know RAID very well.
And the really serious server people use a battery-backed controller card when using write cache. The controller card has a small battery on it to protect the write cache.
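
A toy sketch of that difference, with entirely made-up class names just to show the semantics (this is not any NAS vendor's or controller card's API):

```python
# Toy model of write-through vs. write-back caching.

class Disk:
    """Stands in for the slow, durable spinning array."""
    def __init__(self):
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data          # durable once this returns


class WriteThroughCache:
    """Every write goes straight to disk: slower, nothing to lose."""
    def __init__(self, disk):
        self.disk = disk

    def write(self, addr, data):
        self.disk.write(addr, data)       # caller waits on the disk


class WriteBackCache:
    """Writes are acknowledged from RAM and flushed later: fast, but
    dirty data vanishes if power dies before a flush -- hence the
    battery on serious controller cards."""
    def __init__(self, disk):
        self.disk = disk
        self.dirty = {}

    def write(self, addr, data):
        self.dirty[addr] = data           # ack immediately, disk untouched

    def flush(self):
        for addr, data in self.dirty.items():
            self.disk.write(addr, data)
        self.dirty.clear()

    def power_loss(self):
        lost = len(self.dirty)            # unflushed writes are gone
        self.dirty.clear()
        return lost


cache = WriteBackCache(Disk())
cache.write(0, b"block-0")
cache.write(1, b"block-1")
print("writes lost on power cut:", cache.power_loss())   # -> 2
```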
 
I'm not asking about doing a completely different setup than the 8-drive ~100TB I have. I'm asking about RAID 5 vs 6 considerations for video editing, and rebuild times. There's no "mission critical" reason why the RAID can't be down a week if something happens and it needs to rebuild.

The first couple recoveries!?? How many times do you expect a drive to fail? Good grief... odds are with regular checking and scanning of the disks and replacing any that have an issue, it would be years before any surprises happened. Multiple drive failures would presumably take so many years to happen I'd be onto a completely different system by that point in the future. This project isn't going to last forever, and I do not expect to be using the same NAS hardware a decade from now.

Why would anyone "lose it all" when multiple archival backups are in place? The system can always be rebuilt from scratch, it would just take time... the question I keep asking here is HOW MUCH TIME? If it takes a week to recover RAID5 or RAID6 with (8) 14TB drives, that's fine. I'm not running NASA or an emergency room.

Any disk replacement has the same impact - whether because the previous one failed, is aging, or you just decided that shiny spare disk couldn't wait to be part of the array. This is why the parity setups aren't recommended for large arrays. Yes, you can replace a disk that fails scrubs, SMART checks, etc., but that has exactly the same hit on the other disks during recovery as if you had waited for it to fail.

I wasn't trying to convince you to swap to RAID10 specifically - just raising questions anyone building an array should think about. In your case you've made the decision that the cost of RAID10 isn't worth it and also answered the question of "what if it's out for a week" - both perfectly valid and good decision points when choosing the array type. Similarly, if you have backups then you won't (shouldn't) lose it all, but the point of my post was things for anyone building an array to think about - you would be surprised how many people think RAID is backup and gives them bulletproof data protection.

Anyway, given your preference for a parity array for cost reasons, I'd take the RAID6 approach simply because it gives slightly more protection. I'm not familiar with the QNAP arrays and how they do the scrub, but assuming it's not much more than triggering a disk's long SMART test, it will be much quicker than the actual rebuild, as each disk is independent and unhindered by the other disks, motherboard I/O, CPU etc. For planning purposes and decision making here I would still use a week as the ballpark recovery figure - and this is worst case.
 
Read caching is a given. My guess is you are mixing up write-through and write-back caching. A write-through cache requires the system to write all writes directly to the hard drive. Do not use a write cache without battery backup like an APC UPS. With a write cache it should help. This is in general, as I have not used your hardware, but I know RAID very well.
And the really serious server people use a battery-backed controller card when using write cache. The controller card has a small battery on it to protect the write cache.

Video editing is 99.99% a read process on the drives, scrubbing through and playing large files - not a writing process. Either way APCs and controller cards have nothing to do with my question, which was about rebuild times and RAID 5 vs 6.
 
my post was things for anyone building an array to think about - you would be surprised how many people think RAID is backup and gives them bulletproof data protection.

But this isn't a blog article for a general audience - it's an answer to a specific question from one person on a tech forum. And I made it very clear in the OP that I already had more than one backup in place for my data.

So it's annoying to have multiple guys who apparently didn't bother to read what I wrote tell me that "backups are important" while ignoring the actual question in the post. I mean... no kidding backups are important.... maybe next dudes will be explaining that I shouldn't push spinning hard drives off my desk for fun! 😏

For planning purposes and decision making here I would still use a week as the ballpark recovery figure - and this is worst case.

Great - that's reasonable and what I wanted to know.
 
If you're actually editing a video, it isn't a 99.99% read process. Your sources don't sound reliable.
 
So it's annoying to have multiple guys who apparently didn't bother to read what I wrote tell me that "backups are important" while ignoring the actual question in the post. I mean... no kidding backups are important.... maybe next dudes will be explaining that I shouldn't push spinning hard drives off my desk for fun! 😏

Relax man - seriously...

Consider the scope of the forum, once things are here, they're here forever...

We've had way too many stories about NAS and data loss - so it's kind of a default that folks mention backups for the NAS box.

It's not specifically for your benefit, but for those who stumble across this thread later on - SEO and Google search will likely land others in a similar situation to yours - different workflows perhaps...
 
If you're actually editing a video, it isn't a 99.99% read process. Your sources don't sound reliable.
Um, it SHOULD be 99.99% read. That’s how the technology of video editing works. If it’s not 99.99% read, you’re doing something wrong.

When you’re editing a video, you’re using editor software (Adobe Premiere, FCPX, etc.) to build a timeline of which clips of which source videos* you want shown, and in what order. But the “clips” in the timeline are just pointers - references to the source files that your NLE keeps reading from again and again as you edit.

You never rewrite your source files, because doing so would degrade the quality of your source file each time you did, like copying a VHS tape repeatedly. So when you “edit a clip” you’re not writing anything to the source videos on your external storage. You’re just changing the contents of a timeline file (effectively a table / spreadsheet of your intended clips and edits, probably kept on your primary SSD for performance) that describes which start and end points of the source video you intend to use as a “clip”, and what modifiers you intend to apply to it when you write your final output video at the end of your editing.

When you are done editing and export a final video file, your NLE creates that final file by reading from different parts of all your original source videos, combining them in the right order, and applying your saved edits to each of them as the final output video is written. So each time you do write out a video, you’re reading from any number of source files, and possibly multiple times from the same source file. It could be dozens or even hundreds of reads per write. And that’s only during the times you are writing.

So yeah, 99.99% read. Give or take a decimal point.

*Source videos while editing may be the originals, or smaller proxy files you create to reduce your CPU workload while editing, but either way the process works the same. You can also generate intermediary render files to reduce CPU overhead while editing / previewing, but you don’t have to.
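
If it helps make the “clips are just pointers” idea concrete, here’s a hypothetical, stripped-down sketch of what a timeline holds. All names and paths are invented for illustration - this is not any real NLE’s project format:

```python
# Clips are just references (source path + in/out points); export is the
# only step that touches the heavyweight source files.

from dataclasses import dataclass

@dataclass
class Clip:
    source: str       # path to the untouched source file on the NAS
    in_point: float   # seconds into the source
    out_point: float

# "Editing" mutates only this tiny list; the source files are never written.
timeline = [
    Clip("nas/day01/A001.mov", in_point=12.0, out_point=19.5),
    Clip("nas/day01/A002.mov", in_point=3.2,  out_point=8.0),
    Clip("nas/day01/A001.mov", in_point=40.0, out_point=55.0),  # same source reused
]

def export(timeline, output="final_cut.mov"):
    """Many reads of source material, one write of the finished file."""
    for clip in timeline:
        print(f"read  {clip.source} [{clip.in_point}s - {clip.out_point}s]")
    print(f"write {output}  (the only write in the whole process)")

export(timeline)
```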
 
Well, when you write the finished video, it will interrupt reading if you don't have a write cache. This is very important if you have a multi-user RAID; single user, not so much.

Things like the heads on the drives in the RAID have to reposition for the write-through with no cache. It hurts performance. A write cache can help all this, but it is dangerous without solid power backup.
 
Well, when you write the finished video, it will interrupt reading if you don't have a write cache. This is very important if you have a multi-user RAID; single user, not so much.

Things like the heads on the drives in the RAID have to reposition for the write-through with no cache. It hurts performance. A write cache can help all this, but it is dangerous without solid power backup.
That’s why I don’t write the rendered video directly back out to the NAS. I’ll do it to local storage; a single output file is very, very small compared to my source material, especially since it is more heavily compressed than my high-bitrate source videos. And as you noted, writing back to the NAS while reading source files could slow things down. Once I have final deliverable files I’m happy with, I’ll move them to the NAS.

Which is about the only time I ever need to write to the NAS.

I don’t really need a write cache because I’m really never writing to the NAS, except when I’m batch loading a whole day’s worth of video, or final deliverable files. Like I said, 99.99% read is accurate.
 
A write cache will allow you to write to the NAS and have it cached, then written out when the heads are close or as it fits into the queue where it is efficient to write the data. This really speeds things up on a server.

The bad thing is the data will be corrupted if you lose power.
 
Um, it SHOULD be 99.99% read. That’s how the technology of video editing works. If it’s not 99.99% read, you’re doing something wrong.

When you’re editing a video, you’re using editor software (Adobe Premiere, FCPX, etc.) to build a timeline of which clips of which source videos* you want shown, and in what order. But the “clips” in the timeline are just pointers - references to the source files that your NLE keeps reading from again and again as you edit.

You never rewrite your source files, because doing so would degrade the quality of your source file each time you did, like copying a VHS tape repeatedly. So when you “edit a clip” you’re not writing anything to the source videos on your external storage. You’re just changing the contents of a timeline file (effectively a table / spreadsheet of your intended clips and edits, probably kept on your primary SSD for performance) that describes which start and end points of the source video you intend to use as a “clip”, and what modifiers you intend to apply to it when you write your final output video at the end of your editing.

When you are done editing and export a final video file, your NLE creates that final file by reading from different parts of all your original source videos, combining them in the right order, and applying your saved edits to each of them as the final output video is written. So each time you do write out a video, you’re reading from any number of source files, and possibly multiple times from the same source file. It could be dozens or even hundreds of reads per write. And that’s only during the times you are writing.

So yeah, 99.99% read. Give or take a decimal point.

*Source videos while editing may be the originals, or smaller proxy files you create to reduce your CPU workload while editing, but either way the process works the same. You can also generate intermediary render files to reduce CPU overhead while editing / previewing, but you don’t have to.

What you're missing is the preview that is generated in real time via a cache drive/partition while you're editing.

Editing otherwise would make no sense and wouldn't be 'editing'.

There is no video editing setup that can cache those writes in RAM, indefinitely, until you decide you want to export your 'edits'. Editing requires writes, as the editing happens. Pointers don't do the actual edits.
 
What you're missing is the preview that is generated in real time via a cache drive/partition while you're editing.
What you’re missing is that with an intelligent workflow, real-time preview is rendered in real time and isn’t “written” anywhere.

Editing in low enough detail that you can watch in real-time is easy now. Especially if you properly use proxy files. Playback in real-time with 1080p H264 proxies has been child’s play on modern hardware for a bit now. On my new MacBook M3 Ultra I can do real-time preview of 4-way split-screen video using original source 4K ProRes files, if I’m not layering on too many effects.

You really have no idea how these tools work, do you? This isn’t 1999, you’re not trying to edit together from two DVCAM tape machines at a time.
 
You're really missing the point here, or, you're really clueless.

Regardless of the resolution used to render, writing to disk is required.

Unless you have a TB+ of RAM to play with during video editing, you're writing to disk. Period.
 