Advice on an 8 bay NAS / iSCSI decision and disk configuration.


HotFix

Occasional Visitor
Greetings,

I finally created an account on this forum after snooping around passively for years. :) I am getting very close to buying a new NAS/iSCSI system and wanted a reality check on where I am in the thought process.

After an exhaustive search for a 1Gb 24-port fanless (noise is a concern) managed switch, I settled on and purchased an HP ProCurve 1810-24G v2 for ~$200. I could have probably gone for a more hardcore switch, but this one met all of my requirements and I think it will give me just enough web-based management to get the job done.

Next up is the NAS/iSCSI system. I want to get an 8 drive system because I want to future proof it somewhat and not have to turn around and buy an add-on storage enclosure. I looked around and the two vendors that keep coming to the top of my list over and over again are QNAP and Synology.

For Synology the 8 bay model question is easy, as that is the DS1813+. For QNAP I have a number of 8 bay options, but the one I think aligns most closely with the DS1813+ for comparison purposes is the TS-870 non-Pro. They both have the 4 NIC ports that I want (although 2 of them are upgradeable to 10Gb on the QNAP, which the Synology can't do), and the QNAP non-Pro unit looks to have a better processor than the Synology unit. The QNAP is a little more expensive, but I am seeing online that they keep coming out with a 30% off coupon which would make it a little cheaper; otherwise they are close enough in price.

I emailed Synology and asked them a bunch of questions. I liked most of their responses, but the two things I didn't like were that they couldn't explain to me why the eSATA connections on the DS1813+ are 3.0Gb/s (when the main unit will run the internal drives at 6.0Gb/s), and that their software cannot dynamically expand RAID 1+0 on the fly (which kind of dumbfounded me since I have been doing this on server grade RAID controllers for like 15 years).

The QNAP on the other hand doesn't have any eSATA external storage expansion capability today, which I shouldn't need any time soon, and hopefully by the time I do they will have a solution. The OS doesn't look as "polished" as the Synology's, but it does appear to have everything I would want. The QNAP does do everything RAID-wise that I would expect a SAN to do, and I can even upgrade the processor (voiding the warranty) in the future if I want to, so I have no real complaints about the QNAP.

So assuming I can get that 30% off coupon, I will be getting the TS-870 non-Pro as I think it has everything I want and gives me the maximum flexibility in the future (2 x NIC port upgrades and processor upgrade) and it would be cheaper than the Synology unit. Sure the Synology SHR is a nice feature, but according to Synology it performs a little worse than RAID 5 so I don't think I will miss it as I don't plan on putting in random sized drives.

Either way, I intend to team 2 x NIC ports on the NAS and make them the "production network" NAS interface. This is the way my desktops, mobile devices, consoles, and whatever else will traditionally access the server. I like 2 x NIC ports in a link aggregation team so that no single-NIC device can overload the interface to the NAS, i.e., a massive file copy from one single-NIC machine to the NAS shouldn't really cause media-streaming congestion (disk I/O contention aside) for another device.
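For anyone wondering why a single client can't swamp the team: as I understand it, LACP-style aggregation hashes each flow's headers to pick one member link, so one TCP stream stays pinned to one 1Gb port while other clients' flows can land on the other. A minimal sketch of the idea (not any vendor's actual algorithm; real switches hash L2/L3/L4 fields in hardware, and the addresses here are made up):

```python
# Sketch: an LACP team picks one member link per flow by hashing the
# flow's addressing tuple. Illustrative only; values are hypothetical.
import hashlib

TEAM_LINKS = 2  # 2 x 1GbE ports in the aggregation group

def link_for_flow(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Return the member link (0 or 1) this flow rides for its lifetime."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % TEAM_LINKS

# One big file copy from a single-NIC client is ONE flow -> one 1Gb link,
# while another client's media stream can hash onto the other link.
print(link_for_flow("192.168.10.21", "192.168.10.5", 50123, 445))
print(link_for_flow("192.168.10.33", "192.168.10.5", 50500, 445))
```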

The other 2 ports I plan on teaming (or whatever the recommended practice is) for iSCSI, as I need to run a number of VMs in my home lab. I work for a MAJOR software manufacturer and I need to be able to test various scenarios with multiple VMs, so iSCSI I/O is important to me since really sluggish VMs annoy the heck out of me (and slow me down). I plan on using two Hyper-V hosts in a cluster, each with 2 iSCSI NICs in an MPIO configuration. I understand that one server with 2 iSCSI NICs could technically overwhelm the NAS box's 2 x NIC port configuration, but the chances of that are slim considering I am talking about a limited number of iSCSI hosts.
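For contrast with the LACP team above, MPIO distributes individual I/Os across paths, so a single host really can use both NAS ports for the same LUN. A toy illustration of round-robin path selection (the portal IPs are hypothetical and sit on dedicated iSCSI subnets, per the VLAN split below):

```python
# Toy illustration of MPIO round-robin: the initiator alternates individual
# I/Os across its two iSCSI sessions, unlike LACP where one flow sticks to
# one link. Portal IPs are made-up examples.
from itertools import cycle

paths = cycle(["10.99.1.10 via host NIC1", "10.99.2.10 via host NIC2"])

for io_number in range(4):
    print(f"I/O {io_number} -> {next(paths)}")
# I/O 0 -> 10.99.1.10 via host NIC1
# I/O 1 -> 10.99.2.10 via host NIC2
# I/O 2 -> 10.99.1.10 via host NIC1
# I/O 3 -> 10.99.2.10 via host NIC2
```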

The iSCSI NIC team and the production network NIC team will be on different VLAN/Subnets to try and mirror a typical production environment.

So reality check #1 time - anyone see any flaws with my NAS enclosure leanings or my NIC configuration choice/ideas? I welcome all constructive feedback even if I don't end up agreeing with it. :)

Next up is the disk configuration in the NAS. When I started going down this 8 drive bay system route, I was leaning heavily toward RAID 1+0. I hate the idea of "wasting" the space of 1/2 of my drives, but iSCSI read and write I/O is important to me. However I recently read about SSD caching and saw a few posts on here (a special shout out to Dennis Wood for his great posts) and was thinking it *may* make sense for me to switch to a 7 disk RAID 5 with 1 SSD cache drive. I have read that the QNAP systems only do read cache, but even then, read cache can indirectly improve write performance.

Reality check #2 - is the write performance enhancement I will get on an SSD-cached 7 disk RAID 5 volume going to equal or exceed the write performance I would get with an 8 disk RAID 1+0 volume? If so I would rather go the RAID 5 + SSD route, as it would give me the usable space equivalent of 6 disks (7 minus 1 for parity) versus 4 disks (half of 8 in mirrors). I can't really find a detailed performance comparison of these two approaches, and I don't want to go the SSD cache route hoping it will work for me and end up hating my decision.
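To make the capacity side of that trade-off concrete, here is the arithmetic as a quick sketch (3TB drives assumed, in decimal/marketing TB, before filesystem overhead):

```python
# Usable-capacity arithmetic for the two layouts under consideration.
DRIVE_TB = 3

def raid10_usable_tb(n_drives: int) -> int:
    return n_drives // 2 * DRIVE_TB   # mirror pairs: half the raw space

def raid5_usable_tb(n_drives: int) -> int:
    return (n_drives - 1) * DRIVE_TB  # one drive's worth of parity

print(raid10_usable_tb(8))  # 12 TB usable from all 8 bays
print(raid5_usable_tb(7))   # 18 TB usable from 7 bays + 1 bay for SSD cache
```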

Finally we come to the hard drives. I had originally intended to get the WD Red drives as I had known about them for a while, but then I saw some comparisons between them and the Seagate NAS drives like this one:
http://www.anandtech.com/show/7258/battle-of-the-4-tb-nas-drives-wd-red-and-seagate-nas-hdd-faceoff/4
Since the drives were roughly the same price, but the Seagate NAS drives (mostly) outperformed the WD Red drives, I started leaning towards the Seagate NAS drives. I decided to do some more research and found that Hitachi had also entered the NAS drive arena. In comparing the three NAS drive types, both performance and power draw seemed to track their RPM speeds:
WD Red = 5400 RPM.
Seagate NAS = 5900 RPM.
Hitachi NAS = 7200 RPM.
I was torn between the fastest disks (Hitachi), the most energy efficient (WD Red), and the middle of the pack (Seagate), so I decided to do some research on failure rates. That led me to this article:
http://blog.backblaze.com/2014/01/21/what-hard-drive-should-i-buy/
This makes me want to shy away from Seagate even though I realize the NAS firmware should make the drives more stable than the consumer drives used in that study. I think I am leaning towards the Hitachi drives because of my IOPS concerns, and I think I will just have to eat the bigger electric bill and pray the faster and more energy consuming drives don't make my office hotter.

Reality check #3 - Does anyone see a problem with my logic on which drives to purchase?

Lastly, I was leaning towards the 3TB drives because the price per GB was lower on them than on the 4TB drives. The size of that delta varies depending on which manufacturer you look at. And while I hate wasting 1/2 of my drive space with RAID 1+0, *if* I go the RAID 5 w/ SSD cache route, the 3TB drives become much easier to swallow in regards to total usable space. For example, the Hitachi drives work out like this:
Hitachi NAS 3TB drive = $140. $140 / 3000 = $0.0467 per GB
Hitachi NAS 4TB drive = $190. $190 / 4000 = $0.0475 per GB

3TB RAID = $140 * 8 = $1120.
4TB RAID = $190 * 8 = $1520.
8 drive difference = $400.

The $400 difference might not seem like much, but when I am adding everything up I look at it as $400 I don't need to spend right now if the total useable space with 3TB drives will tide me over for a few years.
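The same arithmetic as a reusable snippet, in case anyone wants to plug in their own street prices (the prices here are the ones quoted above):

```python
# Drive-cost arithmetic: price per GB (decimal GB, matching drive marketing)
# and total cost for a full 8-bay array.
def cost_summary(price: float, tb: int, n_drives: int = 8) -> None:
    per_gb = price / (tb * 1000)
    print(f"{tb}TB: ${price:.0f} each, ${per_gb:.4f}/GB, "
          f"${price * n_drives:.0f} for {n_drives} drives")

cost_summary(140, 3)  # 3TB: $140 each, $0.0467/GB, $1120 for 8 drives
cost_summary(190, 4)  # 4TB: $190 each, $0.0475/GB, $1520 for 8 drives
```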

Final reality check - any thoughts on everything as a whole that I am thinking of doing?

Thank you for taking the time to read this, and thank you again if you have a constructive comment you can provide.
 
Could you be a bit more concise?

I wonder how you're going to backup that beast?
 

I'm an engineer by trade so being concise is just part of my nature. :)

I am going to back up the servers (virtual and physical) and desktops using DPM to an iSCSI LUN. I will then replicate that DPM backup to Windows Azure, as part of my job I get some free storage there that DPM can replicate to.

As for the non-iSCSI data, I am going to look at a couple of options, including some of the other cloud backup solutions the QNAP can talk to. A large bulk of the data will be movies and other data that I could recreate if I lost it, so I am not sure I need to back that type of data up.

BTW - I realize I have probably overanalyzed this design to death, but this is a huge personal cash outlay on my part and I want to get at least 3-4 years out of it.
 
SMB3 (QTS 4.1) will likely perform better than iSCSI (use jumbo frames) and make your life a lot simpler. You'll need Windows 8 or Server 2012 to take advantage of SMB3 on the client end.

Being an engineer, I'd suggest you read my blog series "Confessions of a 10GbE Newbie". I suspect this will address a ton of your questions :)
 
You really NEED 8 bays? Don't buy all those bays for the sake of RAID, I say.

The credo: "RAID is not a backup". These days, data loss is far more likely to be caused by human error, theft, or a NAS power supply/mainboard failure than by drive failure. Especially if you run 2+ volumes independently in the NAS, and back up via USB3/eSATA to out-of-sight drive(s).
 

I just read through the entire thing (awesome collection of information BTW).

Referencing my original post, I would interpret your findings in that write-up and your other post I commented in as follows. I am phrasing them from what I believe your perspective to be, so you can yay/nay them and I can speak to them. :)

#1. Storage enclosure decision - you give 2 thumbs up for the 8 bay QNAP.
No argument there.
#2. NIC configuration - you would recommend going 10Gb, but if not you would recommend teaming all 4 NICs into a single LACP team to maximize throughput.
I hear you. I don't want to plunk down $$$ on 10Gb right now but rather have an upgrade path in the future as I get more $$$. The TS-870 seems to provide that path, so I will go the 4 NIC single LACP route if I end up using SMB3 over iSCSI.
#3. iSCSI configuration - You would recommend using SMB3 in lieu of iSCSI for server LUN access and just simplifying the architecture.
The lab needs to recreate (as best as it can) customer environments, which today are predominantly iSCSI for Hyper-V hosts. I will however run this SMB3 vs iSCSI thought by my co-workers who specialize in all things Windows and see if they think they can steer our joint customers to SMB3 in lieu of iSCSI in the future.
#4. Disk/RAID configuration - You would recommend RAID 5 over RAID 1+0 because you have seen an 8 disk RAID 5 perform really well.
I think I am leaning more heavily towards RAID 5 just so I don't "waste" space in the NAS/SAN. However I think most of the testing I saw you perform was MB/s and not IOPS. Did I miss an IOPS comparison of the different configurations? I am thinking that a 7 disk RAID 5 with an SSD cache will provide a serious I/O buffer for the potential IOPS shortcomings of RAID 5, based on someone else's 6 disk QNAP system test (see the rough write-penalty sketch after this list). What do you think?
#5. Hard drive choice - you seem to be leaning heavily toward the Hitachi drives in your tests, so I guess that is your vote of the 3 NAS drive types. In your experience are they loud and/or do they generate a lot of heat?
#6. 3TB versus 4TB - I don't really see anything in your posts other than you seem to lean to the 4TB drives. I think unless someone can come up with a good reason to spend the extra $400, I am going to go down the 3TB route.

Is that a fair summary and would you please be so kind as to address some of my follow up questions?
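On the IOPS question in #4, the standard back-of-envelope model charges each random write a "write penalty" (roughly 4 backend I/Os for RAID 5 read-modify-write, 2 for mirroring), which is why an 8 disk RAID 1+0 can still out-write a 7 disk RAID 5 despite having fewer data disks, and an SSD read cache doesn't directly change that. A sketch, with an assumed per-spindle figure rather than a measured QNAP number:

```python
# Back-of-envelope random-write model using classic RAID write penalties:
# RAID 5 ~4 backend I/Os per random write, RAID 1+0 ~2 (one per mirror side).
DISK_IOPS = 80  # rough random IOPS for one 7200rpm spindle (assumption)

def array_write_iops(n_drives: int, write_penalty: int) -> float:
    return n_drives * DISK_IOPS / write_penalty

print(array_write_iops(8, 2))  # 8 disk RAID 1+0 -> ~320 random write IOPS
print(array_write_iops(7, 4))  # 7 disk RAID 5   -> ~140 random write IOPS
```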
 

I appreciate how your initial responses are very short but you come back and add more meat to them. Thank you for that!

Technically I only NEED a 2 disk NAS for shared storage of my Hyper-V cluster, but the performance would be horrible. I wouldn't go for anything less than a 4 bay system because I also want to use the NAS for all of the cool NAS features like storing large caches of software and personal files (my wife shoots a lot of pictures on her DSLR in RAW), and using a lot of the apps QNAP provides. I.e., this isn't just for my lab RAID, and since I would use a minimum of a 4 bay system for that, I would want a 6+ bay system for the entire thing so I could just get one of them (versus two 4 bay systems).

When I compared the prices of the 6 bay system and the 8 bay system, the price difference made it a no-brainer. That and getting an 8 bay system allows me to future-proof it some.

Would I necessarily buy all 8 disks (or 7 with an SSD) day one? Probably not, but I design for the end goal and then buy what I need to get it operational while keeping the end goal in mind/as a target. Trust me, I am not looking to waste money, but I think the 8 bay system is a smart long term investment. :)
 
A more likely scenario is the NAS being hit by an encrypting virus, and these are only going to get more sophisticated. They arrive via a Windows machine with mapped drives (it's only a matter of time until all UNC references are targeted!). There will be increasing incentive for hackers to invest in these viruses, as ransoming decryption keys via bitcoin "tumblers" likely extorted huge dollars in the first variant, which hit back in September. I see this as the number one threat right now, given how the first zero-day infections went entirely undetected by many AV packages.

My approach to workstation security (for example, GPO-enforced max UAC settings on Windows workstations) changed substantially after September. No more users as local admin on workstations, file journaling via Webroot AV, etc. NAS storage space means that enabling the NAS network recycle bin can "save" deleted workstation backup images for as long as space permits. This feature can essentially extend your backup rotation recovery window if required. The NAS network recycle bin should therefore always be restricted from access by any networked user ID - local NAS admin account only. Some thought up front like this means a network-aware destructive virus, or complete hardware failure, is not quite as scary.

Rather than fully populating the TS-870, we spread the storage around between the NAS (primary backup for our 10GbE file server) and a Windows 2012 file server. A third (older, slower) NAS provides snapshots to recover from a virus event as described above. It doesn't matter how you do it, but I'd suggest a backup strategy that gives you at least a month's recovery window. A backup NAS is cheaper than an LTO tape auto-loader.
 
And to counter things such as human error and an encrypting virus: in my NAS, volume 2 has no shares. It has a time backup of key folders on volume 1, going back several months of file versions. I don't do time backups of big drive images, just keep the last 2 versions.

Zero reliance on RAID.
 
So Hyper-V does support file storage over SMB3, so I would explore that.

3TB vs 4TB is just based on my past experience with NAS, where I've gone back and upgraded all drives after the fact for more space. This time I started with 4TB but left room for expansion on all devices. In the long run, this approach should be more cost effective. The Hitachi 7200s have good reviews online, and were also well regarded by the Backblaze storage/failure stats. Four months with 15 drives and zero issues so far. They top out at about 85F, this after 5 hours of backup at ~260MB/s in a 75F ambient room.

With regard to iSCSI, I ended up adding a 5TB thin-provisioned iSCSI LUN on the TS-870. This is managed by the 2012 server as a dedicated backup disk which automatically adds shadow copies and masters according to disk space. A nice find was that Server 2012 has built-in MCS (multiple connections per session) that "sees" the NAS over both 10GbE connections (not teamed) between the NAS and server. What I have observed is that the bandwidth can then be shared equally over both 10GbE connections, with failover or round-robin load balancing. This is configured entirely in the 2012 Server iSCSI target GUI. The backup runs at about 260MB/s.
 

Agreed on investigating SMB3 in lieu of iSCSI. I have asked one of the guys at my work who is more focused on Windows and Hyper-V what the performance difference would be between a single-channel SMB3 connection to the QNAP unit and MPIO iSCSI.

Understood on the 4TB versus 3TB cost in comparison to the long term growth capabilities. I did confirm with QNAP that there will be an expansion tower unit released in Q4 that will work with the TS-870. I did not get confirmation whether it will require me to replace the 2 x 1Gb add-in NIC, and I suspect it might. So I think if I went with 3TB disks I would probably be fine for years.

Where is the volume on which you are creating the shadow copies for the QNAP-hosted iSCSI LUN? I.e., is it a local volume in the Windows Server or is it also hosted on the QNAP?
 

I found a great write-up on multi-channel SMB v3 versus single channel, and all the variations on both:
http://blogs.technet.com/b/josebda/archive/2012/06/28/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0.aspx

I highly recommend this read, as it helps explain why the Samba folks really need to add multi-channel support (so it will flow down to NAS vendors like QNAP and Synology).
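The practical upshot for a box like the TS-870 today: without multichannel, one SMB session rides one TCP connection and therefore one 1Gb NIC, no matter how many ports are in the team. Rough ceiling math (the ~118MB/s payload figure for 1GbE is my assumption, not a benchmark):

```python
# Rough throughput ceilings for single-channel vs multichannel SMB3.
GBE_PAYLOAD_MBPS = 118  # practical 1GbE payload ceiling in MB/s (assumption)

def smb_ceiling_mbps(nics: int, multichannel: bool) -> int:
    usable_nics = nics if multichannel else 1  # one session -> one NIC today
    return usable_nics * GBE_PAYLOAD_MBPS

print(smb_ceiling_mbps(4, multichannel=False))  # ~118 MB/s to the NAS today
print(smb_ceiling_mbps(4, multichannel=True))   # ~472 MB/s if Samba adds it
```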

And of course Microsoft marches forward by improving SMB v3 to v3.02 in Server/Hyper-V 2012 R2:
http://blogs.technet.com/b/josebda/archive/2014/03/30/updated-links-on-windows-server-2012-r2-file-server-and-smb-3-0.aspx

I searched this person's blog for a Hyper-V over iSCSI versus SMB v3 head-to-head comparison and didn't find one, but given the information about single-channel SMB v3, I am going to stick with iSCSI MPIO for now, as I know it is designed to handle multiple channels per NIC - not to mention it's what most Hyper-V shops are doing (versus using SMB v3).
 
