Raid 4 Drives

john32073

New Around Here
I am getting a 4-bay NAS server.
I am going to put a 4TB drive in each bay. If I use RAID 1, will that give me a total of 8TB of usable space?
I am going to start with two 4TB drives and RAID 1 those two, then as I need more space add two more 4TB drives.
Would that bring me up to 8TB of usable space? I like the mirroring it gives; I've lost too much info in the past due to HD failure.
Or how would that work, if not RAID 1?
 
RAID10 or RAID5 - I prefer RAID10, but some will differ on that opinion...

One thing to consider as time goes on...

Reliability of the RAID = Mean Time between Failure / Number of Drives
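
For example, if each drive is rated at 1,000,000 hours MTBF, four of them in one array puts the expected time to the first failure somewhere in the set at roughly 1,000,000 / 4 = 250,000 hours (the figure is only an illustration, not a spec for any particular drive).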

Always keep a backup of a NAS, simple and plain common sense...

When one drive fails, consider replacing all of them, barring an early failure. HDDs are very reliable until they reach end of life, and if all the drives are the same age then, in my experience, when one dies the others show up soon after to celebrate.
 
You probably want to run RAID 10. I don't think RAID 5 runs very well on 4 drives; you really need more drives for RAID 5.

Drives are so big nowadays you could run two RAID 1 arrays and still have a large space on each. Or you could run all separate drives, just cross-backup the important stuff, and have more drive space. It just depends on what you want.
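
To put rough numbers on it for four 4TB drives (before filesystem overhead): two RAID 1 pairs or one RAID 10 array gives about 8TB usable, RAID 5 gives about 12TB, and RAID 6 gives about 8TB.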
 
I did not say it would not work, I just said it would not work well. RAID 5 was built for a lot of small drives. The problem now is that drives are so big it takes a long time to resync when you have a problem; it can be days. I have run many RAID 5 arrays in hot-swap mode in the old days.
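
As a rough worked number: rebuilding one 4TB member at an optimistic 150 MB/s sequential is about 4,000,000 MB / 150 MB/s, roughly 7.5 hours, and that is the floor; with the array still serving traffic and the rebuild throttled, a day or more is easy to hit.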
 
Software RAID, the right utilities, and a fast controller can help. I remember defrag software that took ages just to defrag a drive and blamed the controller, when most controllers are in passthrough mode and let the CPU handle it.

Big drives need not take that long. It depends on how well the utility/software is written, how much RAM it can use for its metadata search, and how much CPU power it can use; the only real limits on the controller side are bus bandwidth and drive speeds.

I'm going to be rebuilding my 9TB RAID 5 into 12TB, so I am searching for a good way to do this. It may take just a few hours or it may take days, but we'll see, as I have drives scattered over controllers. I built it using software RAID on Linux, but I think it runs on btrfs.

Unless you have a UPS I'd suggest RAID 6, though.
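
If the pool really is btrfs, the reshape onto bigger drives is roughly the sequence below - a sketch only, with /mnt/array and the sdX/sdY device names as placeholders (and note btrfs raid5/6 still has its write-hole caveats, which is another argument for that UPS):

btrfs device add /dev/sdX /mnt/array
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/array
btrfs device remove /dev/sdY /mnt/array
btrfs balance status /mnt/array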
 
I would not run software RAID. It was not dependable in the past. You want to use hardware RAID.

If you have good hardware RAID you should be able to expand the RAID. It may take a long time, but it should work. You can do it faster by copying your data to another system, rebuilding the RAID as the bigger one, and copying the data back.

If you want a faster RAID, then cache your writes; you will gain a lot of speed. NCQ is at its best when write caching is on. The caveat is that you can corrupt your RAID if you do not run a good APC; you need a good battery backup.
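
The copy-off / rebuild / copy-back route is basically just an rsync in each direction - a sketch only, with the paths and the backuphost name made up:

rsync -aHAX --progress /mnt/raid/ backuphost:/backup/raid/
# ...destroy and re-create the larger array, make the filesystem, then...
rsync -aHAX --progress backuphost:/backup/raid/ /mnt/raid/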
 
Software RAID overtook hardware RAID a few years ago. Hardware RAID has its uses, like being able to add RAM to a card for caching, along with battery backup and some other features, but the performance of software RAID, with its flexibility and compatibility, is actually better; there is less chance of losing data because you can put the drives in any PC and have them detected easily.
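
That "put the drives in any PC" point is literal with Linux mdadm - a sketch, with sdb1 standing in for whichever member disk you grab first:

mdadm --examine /dev/sdb1     # read the RAID superblock on a member disk
mdadm --assemble --scan       # find and assemble whatever arrays it can see
cat /proc/mdstat              # confirm the array came up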
 

So you are talking workstations? Not big servers?

I would think running a gig or 2 of RAM cache on hardware RAID with write caching on would beat any software RAID. You need NCQ drives for big servers.
 
Most consumer NAS boxes run SW RAID these days on Linux mdadm/lvm - it's reliable enough, and the tech is proven. There are a lot of reasons for this, but generally it comes down to "it works".

Things were different back in the old days with Win95 and the RAID chipsets/drivers of that era...

Key thing - and I agree - is that whether HW or SW RAID, keep a backup. As I mentioned earlier in this thread, the odds are actually against one when splitting the time between failures across multiple drives, and drives will fail at some point.
 
I don't remember any big RAID boxes with Win95. IBM was building some with OS/2. Novell and IPX were king back in those days, before Microsoft. We had SCSI but no RAID back then. Windows NT Server was where I saw software RAID for the first time, but hardware RAID from IBM was much better; we bought over 50 hardware RAID servers back then. The CPUs were just not powerful enough to run software RAID. The real hardware RAID era started with Windows Server 2000, 2003 and then of course Server 2008. The first hardware RAID servers were SCSI, then the really big ones were Fibre Channel; SATA came a lot later. I have run all of it at one point or another over my lifetime.

So are you saying all NAS boxes now run software RAID, or just the smaller consumer boxes?
 
Even ARM can do RAID 6 with ease.
Hardware RAID now is for specific purposes, because you don't have compatibility, maybe not even between vendors: when you change the hardware or the card, you may not be able to use the RAID and you may just lose all your data. With software RAID, any compatible OS will read it. A Linux/Unix OS will read all software RAIDs, even MS ones; MS will only read its own, but that still covers more ground than hardware RAID. If something goes wrong you can plug the drives into another computer, or chain them via USB, boot up Windows (even from USB if you wish), and access the array like normal. With a hardware RAID that may not be as simple.

Hence software RAID isn't just "it works" - it's simpler and it works better. There are times when you need hardware RAID, but many even in the datacenter have found it pointless to use a hardware RAID card unless you have so many drives that you need the channels and bandwidth. For example, with 30 drives you could run 4 RAID cards and RAID across them using the cards, or just buy cheap SATA cards for the job; but once you start using SAS and 16 drives per card, that's when you want a RAID card, since even in software RAID mode it will reliably handle more drives than motherboard I/O and bandwidth can. This is currently about the only avenue left for hardware RAID.

Even in the datacenter software RAID is used: for instance, a boot volume may be two SSDs in RAID 1 via the motherboard, which is basically software RAID, and motherboard RAID (fakeraid) is also interchangeable between boards and OSes (except Ubuntu), though it needs a driver in Windows. It is this flexibility with software RAID, and also cost, that let it overtake hardware RAID: you no longer need RAID cards for performance, you can use the OS, with its RAM (and loads of fast RAM at that) as cache. Software RAID is a joke for CPUs produced in the last 10 years, since they can run arrays at full bandwidth with barely any CPU usage, which leaves headroom for features like ZFS caching and snapshots, with easy and quick config management for the RAID instance and the ability to keep instant spares.
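
The instant-spares bit with ZFS looks roughly like this - a sketch only, with the pool name "tank" and the sdX device names as placeholders:

zpool create tank raidz2 sdb sdc sdd sde   # 4-drive double-parity pool
zpool add tank spare sdf                   # standing hot spare for the pool
zpool status tank                          # shows the spare waiting as AVAIL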
 
Do you get hot swap with software RAID? A lot of times you do not want to power down old drives when you need to change one out. Do you have the means to use write caching? Write-through mode is a lot slower than organizing the hard drive writes, because the drive has to stop what it is doing to perform a write operation; this incurs rotational delay and head movement out of sync with the read operations. Using NCQ with write caching solves these slowdowns.

This is assuming you have a network fast enough to support all this. By today's standards I would think you would need at least a 10-gig server connection.
 
You can on Linux/ZFS. Even on Windows, using Intel RAID, you get lots of options for caching. Hard drives all come with NCQ now, which means the performance depends mainly on the application and not the RAID controller: it means I can hammer the drives with a lot of I/O and get more performance out of them.
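
For what it's worth, on Linux you can see what a drive is actually doing - sda here is just an example device:

cat /sys/block/sda/device/queue_depth    # effective NCQ queue depth (1 means NCQ is off)
hdparm -I /dev/sda | grep -i queue       # queue depth the drive itself advertises
hdparm -W /dev/sda                       # whether the drive's write cache is on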
 
@coxhaus - one can do hot swap on software RAID - some of the off-the-shelf NAS units have specific procedures, but mdadm/lvm allows for this.

BTW - LVM can do RAID without mdadm, but I've heard it's better to use mdadm with LVM on top of it for volume management.

ZFS supports hot swap, and I believe btrfs does this in similar fashion.

I've had both software RAID and hardware RAID in production (LSI MegaRAID); performance is similar - it's more the rest of the machine that makes the difference in performance.
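
For anyone following along, the mdadm + LVM layering is roughly this - a sketch only, with the device names, vg0, data, and the ext4 choice all just examples:

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
pvcreate /dev/md0                   # md0 becomes the LVM physical volume
vgcreate vg0 /dev/md0
lvcreate -n data -l 100%FREE vg0
mkfs.ext4 /dev/vg0/data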
 
That NCQ point does not make sense to me. The application does not talk NCQ; it is the chipset. NCQ is a performance feature built for heavy multitasking. Only drives built for a server or a NAS have NCQ.

NCQ comes into its own when the drives get loaded down and the bottleneck is at the drives. Just running light reads or writes you probably will not see a difference; it is when the drives start lagging behind the requests that NCQ comes into play, and then there is a big difference.

Typically the NCQ drives have a lot of cache on them.

I just bought a 4TB Seagate IronWolf drive at Fry's which has NCQ. I am using it in a workstation because the price was good; they were substituting this drive for another on sale that they were sold out of. But it states on the package that it is built for a NAS, with NCQ.
 
I have run many LSI MegaRAID cards. Dell sold them in their servers many years ago.

One other thing that comes to mind is that I do not believe you can bottleneck drives nowadays with a gig connection. Drives now move too much data; you would need 10 gig or more, or maybe a LAG with multiple gig connections on a small RAID. I have been retired too long to know for sure. RAID was built to gain speed and redundancy, and drives lagged terribly in the old days. Nowadays drives are much faster, but there is still a point where the drive becomes the bottleneck.
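
The arithmetic backs that up: a gigabit link tops out around 125 MB/s before protocol overhead, while a single modern 7200 RPM drive can sustain roughly 150-250 MB/s on sequential work, so even one drive can outrun the link, never mind a striped array.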

Of course SSDs can solve the speed problem, but not the bulk storage problem. I am sure SSDs have a limit also, but it is way up there.
 

I have been reading up on hot swap and it looks to me like software RAID does not support hot swap. I have mostly been reading about ZFS, which is new to me; it did not exist in the old days. It looks like Microsoft uses ZFS in their server storage solutions. Would you please explain hot swap further when using software RAID?

We had some old RAIDs where there was no way we were going to power all the old drives down. I would have bet we would lose the RAID if we did a power-down, due to some of the drives not surviving the power cycle. RAIDs age, and you do not always get the money to replace them.

It is the old bean-counter syndrome where they cut your budget and you have to make do with what you have.

PS
Update: there does not seem to be hot swap for build-it-yourself NAS boxes with software RAID. There are some software-RAID NAS boxes which have implemented hot swap on their hardware; it is a package deal. Let me know if I am missing something? I am kind of out of date with RAID.

I also found this...
https://mangolassi.it/topic/12047/zfs-is-perfectly-safe-on-hardware-raid

And this I found on the internet

"I have never had a ZFS based storage system that has used raid cards, only HBA's or their equivalent, so the whole cache thing is not something I think about on that level. That's what system ram, dedicated cache cards or ssd's are for, managed by zfs directly.
Actually that adds risk. If you use ZFS' memory caching mechanism, it creates the very risk that FreeNAS is claiming about hardware RAID. Normal hardware RAID actually protects against a risk that ZFS creates if used the way that people tend to use it. Hardware RAID uses cache cards and SSDs, just like ZFS can. But unlike ZFS, hardware RAID can ensure that volatile system RAM is never used which is where the big risk comes from. No matter how much ZFS manages it, ZFS assumes that there is no risk from a loss of power... but there is."

Found on Spiceworks: https://community.spiceworks.com/to...hba?utm_source=copy_paste&utm_campaign=growth
 

Hot swap with mdadm/lvm is really easy...

One has to go into mdadm, mark the failed drive, tell mdadm it's going to be removed, put the drive into standby, then physically remove and replace it, copy the partition map, add the replacement drive back into the array, and start the rebuild.

... assumption here is that the chassis/case has removable drive bays...

My example - the RAID is md0, the failed drive is sdb1 - note, this is done either as root or via sudo for a user that is allowed.

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
hdparm -Y /dev/sdb

Remove and replace the drive... now to re-add sdb1 to the array... the first command partitions the drive, copying the partition map from one of the sibling drives in the array...

sfdisk -d /dev/sdc | sfdisk /dev/sdb
mdadm --manage /dev/md0 --add /dev/sdb1

Done... mdadm will start rebuilding things... you can check status of the rebuild by

cat /proc/mdstat

Don't have to tell lvm anything, since it's pointing at md0, not the physical drives that compose md0
 
BTW - that's also how to "grow" an mdadm/lvm array, with a rolling rebuild of md0 onto bigger drives; then, depending on the file system on top of lvm, one can grow the filesystem to the new size.
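
Roughly, once all the members have been swapped for bigger drives, the grow goes like this - a sketch only, assuming the md0 array from the example above, with vg0/data as example LVM names and ext4 as the filesystem:

mdadm --grow /dev/md0 --size=max      # let md use the new drives' full capacity
pvresize /dev/md0                     # tell LVM the physical volume got bigger
lvextend -l +100%FREE /dev/vg0/data   # grow the logical volume
resize2fs /dev/vg0/data               # grow ext4 online to match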

Clever stuff... one of the other fun facts is that when first creating the mdadm/lvm array, as soon as the file system has been laid down, one can put it right into production; there's no need to wait for the striping to finish, that all happens in the background.

If one has a Raspberry Pi, a USB hub, and four thumb drives, I can send you a how-to to explore - hot swapping works there just the same.

ZFS is equally cool - my experience there is on Solaris/SPARC, not on Linux, but it has the ways and means to manage things just as easily.
 
