TS-870 Pro Performance Tests over 10GbE

Dennis Wood

Senior Member
Here are a few results using the QNAP TS-870 Pro in RAID 5 mode with 8 x 4TB Hitachi 7200 RPM drives (about 28TB usable). Note that queue depth in the ATTO tests is 4; however, a single file copy would not see 1200MB/s from the NAS, it's more in the 700MB/s area. The ATTO tests show iSCSI results as well as SMB3 (standard file share). Jumbo frames are enabled on both the NAS and the Windows Server 2012 test machine. RAM was limited to 2GB for the NASPT tests, so those results aren't skewed by file caching.

RAID 5 write results are better than I expected, demonstrating speeds that likely rival the RocketRAID 2720 RAID card in the Windows server. The setup we'll end up with includes both a Windows Server 2012 box and the TS-870 Pro, connected to our design/video/graphics 10GbE workstations via Netgear's 8-port XS708 10GbE switch.

In RAID 0 with 7 drives, write speeds were just about 800MB/s, so the parity hit for RAID 5 is surprisingly low. The reason you see a plateau at 1200MB/s in the ATTO tests is that that's the limit of a single 10GbE connection. This correlates nicely with NTTTCP tests on Windows clients, where I'm seeing 1180MB/s max.
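
For anyone wondering where that ceiling comes from, here's a back-of-the-envelope check (a sketch assuming standard Ethernet/IPv4/TCP overheads and 9000-byte jumbo frames; actual overhead varies slightly with TCP options):

    # Rough goodput ceiling for a single 10GbE TCP connection with
    # 9000-byte jumbo frames. Overhead figures are approximate.
    LINE_RATE_BPS = 10_000_000_000            # 10 Gbit/s
    MTU = 9000                                # IP packet size per jumbo frame
    ETH_OVERHEAD = 14 + 4 + 8 + 12            # header + FCS + preamble + gap
    IP_TCP_HEADERS = 20 + 20                  # IPv4 + TCP, no options

    payload = MTU - IP_TCP_HEADERS            # 8960 data bytes per frame
    wire_bits = (MTU + ETH_OVERHEAD) * 8      # 72304 bits on the wire per frame

    print(f"{LINE_RATE_BPS / wire_bits * payload / 1e6:.0f} MB/s")  # ~1239 MB/s

The observed 1180-1200MB/s sits within a few percent of that figure, which is why the ATTO curves flatten there rather than at the array's limit.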

[Image: smb3vsiscsi.png (ATTO results, SMB3 vs iSCSI)]


[Image: ts870_naspt.png (NASPT results)]


Curious to see how SSD caching might affect results, I set up a 7 disk RAID 0 array, and tested with and without SSD cache enabled. There is an option in the SSD cache interface to bypass the cache for files larger than 16MB, which I toggled on. The results show this effect.

[Image: ts870_ssd_cache.png (SSD cache test results)]
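
To make the bypass behavior concrete, here's a rough sketch of a size-based read-cache policy (illustrative only; QNAP hasn't published their cache internals, and the names here are made up for the example):

    # Illustrative size-based SSD read-cache bypass, modeled on the
    # "skip cache for files larger than 16MB" toggle. Not QNAP's code.
    BYPASS_THRESHOLD = 16 * 1024 * 1024   # 16MB cutoff from the cache GUI

    ssd_cache = {}                        # path -> bytes, stand-in for the SSD tier

    def read_file(path, size, read_from_disk):
        """Serve small files via the SSD cache; stream big ones from the array."""
        if size > BYPASS_THRESHOLD:
            return read_from_disk(path)   # large sequential I/O bypasses the SSD
        if path not in ssd_cache:
            ssd_cache[path] = read_from_disk(path)   # warm the cache on a miss
        return ssd_cache[path]            # subsequent reads hit the SSD

The logic being that large sequential reads already stream well off the spinning array, so the SSD stays reserved for the small random I/O it actually accelerates.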
 
Running some more tests before the TS-870 Pro completely replaces an older TS-509, these results were interesting. You'll note the transfer speed shown here during a workstation backup to a network share is technically not possible over a single 1GbE connection, yet that is what the software reported: 132MB/s. This is the highest I've ever seen reported by this software over 1GbE. The workstation in question is connected to an HP 48-port switch, which connects via a single CAT5e cable to the XS708 10GbE switch, and then on to the TS-870 Pro over 10GbE. I'm pretty sure this speed has a lot to do with SMB3, as Samba on the NAS supports it and the Windows 8.1 workstation also uses SMB3.
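
For context on "technically not possible": a quick estimate of the 1GbE ceiling (same overhead assumptions as the 10GbE numbers above, but at the standard 1500-byte MTU):

    # Goodput ceiling for a single 1GbE TCP connection at MTU 1500.
    LINE_RATE_BPS = 1_000_000_000             # 1 Gbit/s
    MTU, ETH_OVERHEAD, IP_TCP = 1500, 38, 40  # frame and header overheads

    print(f"{LINE_RATE_BPS / ((MTU + ETH_OVERHEAD) * 8) * (MTU - IP_TCP) / 1e6:.0f} MB/s")
    # ~119 MB/s -- so 132MB/s is being measured above the wire, e.g. the
    # backup software counting logical bytes (compressed or skipped blocks)
    # rather than bytes actually sent.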

Many of our workstations will remain 1GbE only, so this test bodes well as we image each workstation daily.

[Image: 1gbe_record.png (backup speed reported over 1GbE)]


Cheers,
Dennis.
 
iSCSI vs SMB3 tests. The target was the NAS, and the host was a Windows 2012 R2 server. If you have any questions and/or tests you'd like to see, ask now. Once the NAS is in production, there won't be any more testing :) For the first time in my 25 years of experience, I can say that affordable over-network video editing has arrived. Finally.

[Image: smb3vsiscsi.png (ATTO results, SMB3 vs iSCSI)]
 
The results with the SSD cache show a 25-40% performance improvement. That is quite impressive. Also interesting to see that files above 16MB were indeed properly filtered out of the cache.
 
I suspect the effect of an SSD write cache would be even greater under a moderate multi-user load. I have no doubt that before this NAS is end-of-life, we'll be populating it fully with SSDs, not just using them as cache. Staggering to think how much power that would save globally...
 
I wish I had seen your thread here before I created this RFC:
http://forums.smallnetbuilder.com/showthread.php?t=17211
It looks like you are already ahead of me.

I would appreciate your insight into what I am proposing to do with the 4 NICs and also your thoughts on the RAID configuration choices.

Based upon your results here (granted, I am not that familiar with the I/O tool you are using), it almost looks like SMB3 outperforms iSCSI. Is that your understanding? If so, I might rethink what I am doing and just throw all 4 NICs at a single production NIC team and use the new capabilities of Hyper-V to run VHDXs stored on a CIFS share instead of an iSCSI LUN.

Beyond that, does the SSD cache drive do write caching? I thought I read it only does read caching.
 
A "new" addition to QTS 4.1 is the virtualization station...so upgrade the ram in a TS-870. (2x 8gb ddr3 1333 SODIMM ) to 16 Gb and run your vm from there. It's a pretty slick setup.

2 NICs are standard; adding two more is optional. If you are OK spending the extra $$ on a 10GbE card, you can direct-connect two workstations. In the production environment, here is the NIC assignment we're using:

TS-870 Pro
1GbE NIC 1: dedicated to a VM (running on the NAS) acting as a Server 2012 R2 BDC
1GbE NIC 2: dedicated to a VM on the NAS hosting a Windows 8.1 workstation (for remote VPN use)
10GbE NIC 1: LAN (Netgear 8-port 10GbE switch)
10GbE NIC 2: direct attach to the Server 2012 box for backup over 10GbE. Rsync (using DeltaCopy on the server side) as well as Server 2012 backup runs over this connection.

TS-470 Pro (offsite backup, home office)
1GbE NIC 1: LAN
10GbE NIC 1: direct attach to a Windows workstation.

Direct attach means no 10GbE switch is required. Teaming will do nothing for network speeds on single connections. Microsoft's SMB3 multichannel will, however, increase single-connection speeds, but you need Windows 8 or Server 2012 to see this. Read my blog series "Confessions of a 10GbE Newbie"; one article discusses the difference between Samba (NAS) SMB3 and Microsoft SMB3.

The TS-870 Pro fully loaded with 4TB 7200 RPM drives will do reads at 730MB/s. SSD caching is for reads and would only make sense where multiple users are reading the same data consistently; if writes are occurring at the same time, that read caching would definitely help. Rather than using an SSD cache, I installed an SSD in the TS-870 to host my virtual machines. This way the drive array can spin down for power savings while the VMs idle on the NAS; if I put the VMs on the RAID 5 array, they would keep the drives spinning 24/7. I'm using RAID 5 across 5 drives on the TS-470, one bay for the SSD, and 2 left for expansion. Our primary file server for video is a Server 2012 box with 6 x 4TB drives in RAID 5, good for 900MB/s reads over MS SMB3 :)
Cheers, Dennis.
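
If you want to see what multiple TCP streams do for throughput on your own links, here's a bare-bones probe in the spirit of NTTTCP (a sketch only; HOST, PORT, STREAMS and DURATION are placeholders, and it runs sender and sink on loopback by default, so point HOST at another machine running the sink half to measure a real link):

    # Minimal multi-stream throughput probe. Starts a discard sink locally,
    # then blasts data at it over STREAMS parallel TCP connections.
    import socket, socketserver, threading, time

    HOST, PORT = "127.0.0.1", 5001        # placeholders -- adjust to taste
    STREAMS, DURATION = 4, 5              # parallel connections, seconds
    CHUNK = bytes(64 * 1024)              # 64KB send buffer of zeros

    class Drain(socketserver.BaseRequestHandler):
        def handle(self):                 # read and discard until peer closes
            while self.request.recv(65536):
                pass

    socketserver.ThreadingTCPServer.allow_reuse_address = True
    sink = socketserver.ThreadingTCPServer((HOST, PORT), Drain)
    threading.Thread(target=sink.serve_forever, daemon=True).start()

    totals = [0] * STREAMS

    def blast(i):
        with socket.create_connection((HOST, PORT)) as s:
            deadline = time.time() + DURATION
            while time.time() < deadline:
                totals[i] += s.send(CHUNK)

    workers = [threading.Thread(target=blast, args=(i,)) for i in range(STREAMS)]
    for w in workers: w.start()
    for w in workers: w.join()

    print(f"{sum(totals) / DURATION / 1e6:.0f} MB/s over {STREAMS} streams")

On a Windows 8/Server 2012 pair, SMB3 multichannel does effectively this for a single file copy; against the NAS's Samba SMB3 you only get one stream per session, which is why the Microsoft-to-Microsoft numbers run higher.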
 
I did see that as an awesome new feature, but unfortunately I need to run a Hyper-V cluster as part of my job. I will be using this as a lab, not only for the VMs running in it; the Hyper-V cluster itself will be part of my lab.
 
In that case, SMB3 rather than iSCSI is likely going to work better for you. You'll find that mounting SMB3 shares from Hyper-V is pretty seamless. I did testing with the VMware hypervisor booting from a flash drive and had zero issues mounting the VM image from an SMB3 share. In a Windows 8/Server 2012 environment I found SMB3 much faster than SMB2. You would need to load QTS 4.1 for this, downloaded from QNAP's forum (login required). You want the April version.

I did not test Hyper-V, but you can check the QNAP forum for other users' results. QNAP does have info on certification: http://www.qnap.com/useng/?lang=en-us&sn=2346 Not sure whether SMB3 is an option for loading the VM; it works well under VMware, though.
 
I just had a random thought. Since the QNAP implementation of SMB3 is not multichannel like Windows 8/2012, do you think there is potential for a performance bottleneck running 4+ VMs off that share? If it were multichannel I would say no, but since that isn't the case, I'm worried I might choke the pipe with the I/O of multiple VMs. If non-multichannel SMB3 is a concern in this scenario, then I will probably just stay with the tried-and-true iSCSI to be safe, since the performance difference wasn't night and day.

Again, just a random thought.
 
There are two strategies you could test. One would be to assign IPs to each NIC on the NAS and load them up appropriately; the other is to enable link aggregation (there are 7-8 choices in the NAS GUI, including 802.3ad LACP), depending on your switch's capabilities. You can bond the 2 NICs that come standard, or add 2 more via the PCIe slot and bond one or both of those as well.

The NAS has excellent bandwidth monitors in the GUI, so you will be able to quickly assess the best load-balancing strategy. We have generally used LACP in the past; however, the 2nd NIC in the group rarely sees any traffic. With 10GbE, I set up a 2-port trunk between our 48-port switch and the 8-port 10GbE switch so the 1GbE clients will not see a single-uplink bottleneck.
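
As to why that 2nd LACP member stays idle: the switch hashes each flow onto a single member link, so one client conversation never spreads across links. A toy illustration (the hash inputs mirror a common src/dst-IP policy, and the addresses are hypothetical; real switches use their own hardware hash):

    # Toy LACP-style flow distribution: each flow hashes onto one member
    # link, so a single large transfer never exceeds one link's speed.
    def member_link(src_ip, dst_ip, links=2):
        """Pick a LAG member the way a src-ip/dst-ip hash policy would."""
        return hash((src_ip, dst_ip)) % links

    NAS = "192.168.1.50"                              # hypothetical address
    # One workstation talking to the NAS: every packet takes the same link.
    print(member_link("192.168.1.21", NAS))           # same index every call

    # Many clients spread across members -- aggregation helps many-to-one.
    for client in ("192.168.1.21", "192.168.1.22", "192.168.1.23"):
        print(client, "->", member_link(client, NAS))

Which is also why SMB3 multichannel helps where plain LACP doesn't: it opens several TCP flows for one copy, and each flow can hash onto a different member.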
 