Drive throughput vs network bandwidth

usao

Occasional Visitor
I'm looking to build a 100TB iSCSI NAS/SAN at home using the newer ST8000NM0075 drives.
These are 8TB, 7200RPM SAS drives.
I'm trying to figure out an optimal solution that will save money but still be performant.
A single 8-drive configuration will be about 50TB in RAID-5. I think the performance of the RAID-5 will be higher than a 1GbE connection can support, but I don't think it will be fast enough to saturate a 10GbE connection either.
So I'm wondering if it would be better to use 4 iSCSI arrays, each with 4 drives in a 3+1 configuration, and each using its own 1GbE port? That should yield around 4GByte/sec of throughput, although it would give a bit less space due to the extra RAID overhead of 4x RAID-5 instead of the 2x RAID-5 needed in a 7+1 config.
I would like to get the higher throughput, at least 4GByte/sec to the host if possible, using a combination of NAS arrays, but I need to keep costs down as much as possible since this is a personal expense.
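For reference, a rough back-of-the-envelope of the raw numbers involved (the ~200 MB/s per-drive figure is an assumption for these drives, not a measurement, and the network figures are bare line rate with no protocol overhead):

# Back-of-the-envelope: usable capacity and best-case sequential bandwidth.
# The 200 MB/s per-drive figure is an assumption, not a measured number.
DRIVE_TB = 8
DRIVE_MBS = 200  # assumed sustained sequential throughput per 7200RPM SAS drive

def raid5(drives):
    """Usable TB and best-case sequential MB/s for one RAID-5 group."""
    usable_tb = (drives - 1) * DRIVE_TB    # one drive's worth lost to parity
    seq_mbs = (drives - 1) * DRIVE_MBS     # parity capacity adds no bandwidth
    return usable_tb, seq_mbs

print("7+1 RAID-5 group :", raid5(8))   # (56 TB, ~1400 MB/s)
print("3+1 RAID-5 group :", raid5(4))   # (24 TB, ~600 MB/s)
print("1GbE line rate   : ~125 MB/s")
print("10GbE line rate  : ~1250 MB/s")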

I originally tried this using an EMC CX4 and HP DL380 configuration, but got poor performance:
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
dm-12 0.00 0.00 4510.40 5136.40 281.90 321.02 128.00 1751.40 181.91 0.10 100.08
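Summing the rMB/s and wMB/s columns of that line gives roughly 282 + 321, so about 600 MB/s combined at ~100% utilisation. A trivial sanity check of that arithmetic, assuming the standard iostat -x column order:

# Sum read + write MB/s from the iostat line above (standard column order assumed).
line = "dm-12 0.00 0.00 4510.40 5136.40 281.90 321.02 128.00 1751.40 181.91 0.10 100.08"
fields = line.split()
rmbs, wmbs = float(fields[5]), float(fields[6])    # rMB/s and wMB/s columns
print(f"combined throughput ~= {rmbs + wmbs:.0f} MB/s")   # ~603 MB/s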

Ideally, I would like to get about 10x the throughput to the host.

Looking for suggestions/ideas.
 
I think you may be underestimating what 8 of those SAS drives could do in RAID5 with the proper controller?

Have you considered a RAID10 setup instead? Should be faster and more reliable.

A single 10GbE port is better than the sum of ten 1GbE ports in any LAG configuration, both for latency and for the speed a single stream can see, because a LAG spreads whole flows across links rather than splitting one flow.
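A toy sketch of why that is: typical LACP/LAG implementations hash each flow onto a single member link, so one iSCSI session never gets more than one 1GbE port's worth of bandwidth no matter how many links are in the bundle (the hash below is purely illustrative, not any vendor's actual algorithm, and the addresses are made up):

# Toy illustration of per-flow LAG hashing: a single iSCSI session is pinned to
# one member link, so it is capped at ~1Gbit/s regardless of the LAG width.
LINKS = 4  # e.g. a 4 x 1GbE LAG

def member_link(src_ip, dst_ip, src_port, dst_port):
    return hash((src_ip, dst_ip, src_port, dst_port)) % LINKS

one_session = ("10.0.0.10", "10.0.0.20", 51234, 3260)     # hypothetical endpoints
print({member_link(*one_session) for _ in range(1000)})    # always the same link

many_sessions = {member_link("10.0.0.10", "10.0.0.20", 50000 + i, 3260) for i in range(16)}
print(many_sessions)                                        # spread across links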

What is this NAS/SAN doing with all this storage capacity and performance? If it is more storage oriented, then pursue the 1GbE port options. If performance is the end game (as it seems it is), then opting not to give this NAS/SAN a real connection (10GbE) to the rest of your systems is a little short-sighted, imo.
 
Have you considered a RAID10 setup instead? Should be faster and more reliable.

Completely agree here - RAID5 will only tolerate the loss of a single disk in the array, and with more drives in the array, statistics suggest a higher chance of failure over time...

RAID10 would be a safer approach...
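To put a rough number on the rebuild risk: a quick sketch, assuming the commonly quoted 1-in-10^15-bit unrecoverable read error (URE) rate for enterprise SAS drives (check the actual ST8000NM0075 datasheet rather than trusting this figure):

import math

# Rough probability of hitting at least one URE while rebuilding a degraded
# RAID-5 group, i.e. while reading every surviving drive end to end.
# The 1e-15 per-bit URE rate is an assumed datasheet-style figure.
URE_PER_BIT = 1e-15
DRIVE_BITS = 8e12 * 8                      # 8 TB per drive, in bits

def p_ure_during_rebuild(surviving_drives):
    bits_read = surviving_drives * DRIVE_BITS
    return 1 - math.exp(-URE_PER_BIT * bits_read)   # Poisson approximation

print(f"7+1 RAID-5 rebuild: ~{p_ure_during_rebuild(7):.0%} chance of a URE")  # ~36%
print(f"3+1 RAID-5 rebuild: ~{p_ure_during_rebuild(3):.0%} chance of a URE")  # ~17%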
 
A single 8-drive configuration will be about 50TB in RAID-5. I think the performance of the RAID-5 will be higher than a 1GbE connection can support, but I don't think it will be fast enough to saturate a 10GbE connection either.
So I'm wondering if it would be better to use 4 iSCSI arrays, each with 4 drives in a 3+1 configuration, and each using its own 1GbE port? That should yield around 4GByte/sec of throughput, although it would give a bit less space due to the extra RAID overhead of 4x RAID-5 instead of the 2x RAID-5 needed in a 7+1 config.
I would like to get the higher throughput, at least 4GByte/sec to the host if possible, using a combination of NAS arrays, but I need to keep costs down as much as possible since this is a personal expense.

Two RAID10's should get you much closer - especially for write performance, and read performance should be as good as or better than RAID5.
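For a rough feel for the write side, the usual rule of thumb is 4 back-end I/Os per random host write for RAID5 (read data, read parity, write data, write parity) versus 2 for RAID10 (one write per mirror side). A quick sketch under that assumption, with an assumed per-drive IOPS figure:

# Rule-of-thumb random-write comparison. The per-drive IOPS number is an
# assumed figure for a 7200 RPM SAS drive, not a measurement.
DRIVE_IOPS = 180
DRIVES = 8

def host_write_iops(write_penalty):
    return DRIVES * DRIVE_IOPS / write_penalty

print(f"8-drive RAID5  random writes: ~{host_write_iops(4):.0f} IOPS")   # ~360
print(f"8-drive RAID10 random writes: ~{host_write_iops(2):.0f} IOPS")   # ~720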

You didn't mention whether you're using software RAID (e.g. mdadm or similar) or a HW RAID controller like an LSI MegaRAID. I've found that in large arrays the MegaRAID does fairly well, even with a couple of fast Xeons pushing it...
 
RAID10 has 2 downsides:
1) Much more expensive.
2) Faster, but not as safe as RAID6. Depending on which 2 drives fail in a RAID10 (specifically both sides of a mirror), you could lose the LUN, while with RAID6 you will be OK regardless of which 2 drives fail.
Yes, I'm using an LSI controller, but I haven't selected which one yet.
 
RAID6 has the same performance issues as RAID5 and a bit more safety, but on the safety front it's similar to RAID10.

Make your own choices - it's your array. I've run data center gear, and when I deployed the NAS into my home network it was RAID10, based on read/write performance and an attempt to get a bit past the statistical odds of the array failing.

Cost wasn't really an issue with disks compared to the cost of the NAS itself in my case - need more space, find larger drives, spend some money...

There are pros and cons with any RAID type - as an informed/experienced person, I made my choice.
 
Faster but not as safe as RAID6. Depending on which 2 drives fail in a RAID10

There are two ways you can build a RAID10 out of a 4-disk array (just using this as an example) - two RAID0's mirrored together, or two RAID1's striped together...

With two RAID1's, you can lose two drives (one from each mirror) and not have a problem...
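To make that concrete: in a 4-disk RAID10 built as two mirrored pairs striped together, 4 of the 6 possible two-drive failures are survivable; only losing both halves of the same mirror takes the array down. A small enumeration sketch:

from itertools import combinations

# 4-disk RAID10 modelled as two mirrored pairs striped together: the array
# survives any failure that leaves at least one drive alive in every pair.
mirrors = [{"d0", "d1"}, {"d2", "d3"}]

def survives(failed):
    return all(pair - failed for pair in mirrors)

outcomes = {frozenset(f): survives(set(f))
            for f in combinations(["d0", "d1", "d2", "d3"], 2)}
print(f"{sum(outcomes.values())} of {len(outcomes)} two-drive failures survivable")  # 4 of 6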
 
