Looking for fast, big and reliable storage solution for video use

You need to think very carefully about your RAID level. I would recommend RAID 10: it gives you the performance of RAID 0 with the redundancy of RAID 1, and on some read workloads it can even outperform RAID 0. Use RAID 0 only if you really don't care about losing everything to a single drive failure. I would recommend RAID 6 only if you are really trying to save money and don't mind a large write-performance penalty and read speeds below RAID 10. With today's drive sizes I would never, under any circumstances, recommend RAID 5; if you are going to run RAID 5 you might as well run RAID 0. RAID 5 really reached the end of its usefulness almost 10 years ago, when drives became cheaper and larger. I won't go through all the reasons here, but I'll leave you with a couple of links to articles that explain why RAID 5 is a bad idea. Whatever you decide, make sure you educate yourself first.

http://www.zdnet.com/article/has-raid5-stopped-working/
https://www.askdbmgt.com/why-raid5-should-be-avoided-at-all-costs.html
http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/

EDIT: OK, I made the statement "I would never, under any circumstances, recommend RAID 5." That is not completely true: if all the drives in your RAID 5 are 200GB or less each, then I might recommend it.
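
For a rough feel for the numbers behind those links, here is a minimal sketch of the rebuild math. The 1-error-per-1e14-bits URE rate is the usual consumer spec-sheet figure, an assumption rather than a measurement:

```python
# Rough odds of hitting an unrecoverable read error (URE) while
# rebuilding a degraded RAID5 array after one drive has failed.
# Assumes the common spec-sheet URE rate of 1 error per 1e14 bits
# read (typical for consumer drives; real-world rates vary).

URE_RATE = 1e-14  # probability of a URE per bit read

def rebuild_failure_probability(drive_tb: float, surviving_drives: int) -> float:
    # A RAID5 rebuild must read every bit on every surviving drive;
    # a single URE anywhere aborts the rebuild and the array is lost.
    bits_to_read = drive_tb * 1e12 * 8 * surviving_drives
    return 1 - (1 - URE_RATE) ** bits_to_read

# Six 6TB drives: the rebuild reads the five survivors in full.
print(f"6 x 6TB RAID5 rebuild failure:   {rebuild_failure_probability(6.0, 5):.0%}")   # ~91%
# Six 200GB drives: the case the EDIT above concedes.
print(f"6 x 200GB RAID5 rebuild failure: {rebuild_failure_probability(0.2, 5):.0%}")   # ~8%
```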
 
Hello everyone,
I received six HDDs yesterday for building my RAID array.
It consists of six IronWolf Pro 6TB drives on an LSI 9217-8i SAS card, which is only capable of RAID 0, 1 and 10.
The RAID 0 test is a breeze: 1080MB/s writing, and a little above that reading. 32TB usable.
But RAID 10 is a failure: 120MB/s writing and 150MB/s reading. 16TB usable.
What am I doing wrong?
I thought I would get about 500MB/s.
Thank you for your help.
 
That sounds like RAID 1. Are you sure you have RAID 10 configured correctly? You could also try switching the controller to write-back, at least for testing. I don't think that controller has a battery-backed cache, so if write-back fixes the problem you will have to decide whether you want to take that chance.
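
Back-of-the-envelope math supports that diagnosis. A minimal sketch, using only the numbers reported above (the per-drive speed is inferred from the RAID 0 test, not measured directly):

```python
# Sanity check of the reported numbers. The six-drive RAID0 result
# of ~1080MB/s implies roughly 180MB/s per drive.

per_drive = 1080 / 6  # MB/s, inferred from the RAID0 test above

# A healthy six-drive RAID10 stripes across three mirror pairs:
expected_write = 3 * per_drive  # each pair writes both copies, so ~3x one drive
expected_read = 6 * per_drive   # reads can be served by either half of each mirror

print(f"expected RAID10 write: ~{expected_write:.0f} MB/s")       # ~540
print(f"expected RAID10 read:  up to ~{expected_read:.0f} MB/s")  # ~1080

# The measured 120MB/s write is below even a single drive, which
# points at a misconfigured array or a cache-policy penalty rather
# than anything inherent to RAID10.
```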
 
Hi,
Thank you for your response.
In the card BIOS I chose RAID 1/10 but selected all six drives. There does not seem to be any way to group the drives into separate arrays and such... Is the card defective?
I do not know if I can switch between write-back and write-through, but everything is set to write-through.
I also have MegaRAID Storage Manager installed on the PC, but it does not seem to do anything more than the BIOS does...
I enclose a picture of the config in Windows.
 

Attachments

  • Raid10setups.JPG (screenshot of the RAID 10 configuration in Windows)
Hmm... make sure the MegaRAID card BIOS is up to date - normally these are pretty good cards (I've used similar ones in carrier-grade servers). Under Red Hat and SUSE they perform quite well; I've never tried them with Windows...

As mentioned earlier - RAID 10 is the way to go...
 
Support replied to me with this:
"RAID 10 was built by RAID1 + RAID0 (please refer to below picture), based on RAID1 the hard drive performance is only based on one drive speed.
Using 6 hard drives is not a good combination for RAID10 (recommended is 4 or 8 drives)"
 
Hello everyone,
I received six HDDs yesterday for building my RAID array.
It consists of six IronWolf Pro 6TB drives on an LSI 9217-8i SAS card, which is only capable of RAID 0, 1 and 10.
The RAID 0 test is a breeze: 1080MB/s writing, and a little above that reading. 32TB usable.
But RAID 10 is a failure: 120MB/s writing and 150MB/s reading. 16TB usable.
What am I doing wrong?
I thought I would get about 500MB/s.
Thank you for your help.
Sounds to me like the ideal setup to run ZFS.
Thanks to memory and/or SSD caching it will seriously improve your performance.
It will also prevent bitrot: https://en.m.wikipedia.org/wiki/Bit_rot
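
To make the bitrot point concrete, here is a minimal sketch of per-block checksumming, the general idea ZFS applies; this is an illustration only, not ZFS's actual on-disk format:

```python
import hashlib

# ZFS keeps a checksum for every block it writes, so silent disk
# corruption is detected on read and, if a mirror or RAIDZ copy
# exists, repaired automatically. A toy version of the check:

def write_block(data: bytes) -> tuple[bytes, bytes]:
    """Store the block together with its checksum."""
    return data, hashlib.sha256(data).digest()

def read_block(data: bytes, checksum: bytes) -> bytes:
    """Verify on read; a mismatch means the block rotted on disk."""
    if hashlib.sha256(data).digest() != checksum:
        raise IOError("checksum mismatch: block corrupt, fetch the mirror copy")
    return data

block, csum = write_block(b"video frame data")
read_block(block, csum)                    # clean read, passes silently
try:
    read_block(b"video frame dat4", csum)  # one flipped byte on "disk"
except IOError as err:
    print(err)                             # corruption detected, not returned
```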


Sent from my ONEPLUS A5010 using Tapatalk
 
Support replied to me with this:
"RAID 10 was built by RAID1 + RAID0 (please refer to below picture), based on RAID1 the hard drive performance is only based on one drive speed.
Using 6 hard drives is not a good combination for RAID10 (recommended is 4 or 8 drives)"

Makes sense actually - maybe consider the following...

drives 0 and 1 - RAID 1 for OS and apps
drives 2-5 - RAID 10 for the storage volume

For our servers that had local storage, we usually split things up like that.
 
Sounds to me like the ideal setup to run ZFS.
Thanks to memory and/or SSD caching it will seriously improve your performance.

I was wondering when someone would bring that up - it really depends on the OS and the use case.

ZFS has pros and cons, and it needs robust support from the OS. I've used ZFS with Solaris, and there it's good. I'd hesitate to run it under Linux considering the state of development compared to other options (and then there's that pesky GPL that keeps getting in the way); btrfs is another option.

If the OP is running Windows, the options there are limited...
 
Support replied to me with this:
"RAID 10 was built by RAID1 + RAID0 (please refer to below picture), based on RAID1 the hard drive performance is only based on one drive speed.
Using 6 hard drives is not a good combination for RAID10 (recommended is 4 or 8 drives)"

I have been using and setting up RAID arrays for more than 15 years and have never heard of a problem with a six-drive RAID 10. I just did a quick Google search and still can't find anything to support what the support techs told you. It is possible that they are correct, as I surely don't know everything, but I have never heard of this problem. At my house I run a six-drive RAID 10 (actually mirrored vdevs) array using FreeNAS (ZFS), and I have never had performance problems. A six-drive RAID 10 should have the write performance of a three-drive RAID 0 and the read performance of a six-drive RAID 0 (possibly even a bit higher, depending).
Personally I think something is up with your RAID card.
 
Thanks for all your feedback.
@sfx & audio: I just need the RAID array for holding media for NLE video. My OS is on an SSD, as is my cache folder, etc...
@abailey: I do believe you are right.
I guess it has something to do with the firmware, because the RAID 0 configuration is excellent (1080MB/s). There seem to be two kinds of BIOSes for MegaRAID. I don't know if I can install the latter on my card (or where to find it)...
 
At my house I run a six-drive RAID 10 (actually mirrored vdevs) array using FreeNAS (ZFS), and I have never had performance problems. A six-drive RAID 10 should have the write performance of a three-drive RAID 0 and the read performance of a six-drive RAID 0 (possibly even a bit higher, depending).
Personally I think something is up with your RAID card.

It should be OK whatever you end up with - when working with the MegaRAID, make sure the groups are balanced. Where it can get weird is with a larger number of disks: you can end up with an odd configuration. Most RAID 10 setups on MegaRAID are a RAID 0 consisting of three RAID 1 pairs.

Once created, you need to let the array settle a bit while it completes the stripes - this can take up to 48 hours. You can read/write to it right away, but let the array settle before running benchmarks.

In the FWIW category: depending on the horsepower of the CPU in use, mdadm can actually be faster.
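
For a picture of what a balanced six-drive RAID 10 layout means, a minimal sketch of the usual chunk-to-mirror-pair mapping (illustrative only; real controllers and mdadm have their own layout variants):

```python
# RAID10 as "a RAID0 across RAID1 pairs": chunk i of the logical
# volume lands on mirror pair (i mod number_of_pairs), and both
# drives in that pair hold identical copies of it.

drives = ["disk0", "disk1", "disk2", "disk3", "disk4", "disk5"]
pairs = [(drives[i], drives[i + 1]) for i in range(0, len(drives), 2)]
# -> three balanced RAID1 pairs: (disk0,disk1) (disk2,disk3) (disk4,disk5)

def locate_chunk(chunk: int) -> tuple[str, str]:
    """Both drives of the returned pair store this chunk."""
    return pairs[chunk % len(pairs)]

for c in range(6):
    print(f"chunk {c} -> mirror pair {locate_chunk(c)}")

# Writes hit one pair per chunk (about 3x one-drive throughput across
# the stripe); reads can be served by either drive of a pair (up to 6x).
```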
 
I was wondering when someone would bring that up - it really depends on the OS and the use case.

ZFS has pros and cons, and it needs robust support from the OS. I've used ZFS with Solaris, and there it's good. I'd hesitate to run it under Linux considering the state of development compared to other options (and then there's that pesky GPL that keeps getting in the way); btrfs is another option.

If the OP is running Windows, the options there are limited...
True, but FreeBSD has excellent ZFS support, and one can also try OmniOSce...

Sent from my ONEPLUS A5010 using Tapatalk
 
