What is the network throughput on a Thecus 8800PRO NAS supposed to be?

SoftDux-Rudi

Occasional Visitor
Hi all,

I recently got a Thecus 8800PRO (our supplier doesn't stock the 8800SAS versions) but I'm a bit disappointed in the network throughput I get from it.

Using SMB I get about 40MB/s transfer using rsync, and only about 9MB/s on average using NFS.

Is this normal from a SATA based NAS, setup in RAID10?


The Thecus 8800PRO is connected via a single gigabit NIC to a gigabit switch, and then I have an HP ProLiant DL360 with 2x 146GB SCSI HDDs set up in a RAID1 configuration. From the HP server I mounted the NFS & SMB shares and ran the following tests:

NFS mount options:
192.168.2.200:/raid1/data/raid-share on /thecus/raid-share type nfs (rw,rsize=32768,wsize=32768,timeo=14,intr,addr=192.168.2.200)
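
For reference, the share was mounted more or less like this; the second command with larger buffers is just something I still want to try (assuming the Thecus accepts rsize/wsize values above 32K), not something I have verified helps:

# current mount, matching the options above
mount -t nfs -o rw,rsize=32768,wsize=32768,timeo=14,intr 192.168.2.200:/raid1/data/raid-share /thecus/raid-share
# untested variant with larger read/write buffers
mount -t nfs -o rw,rsize=65536,wsize=65536,timeo=14,intr 192.168.2.200:/raid1/data/raid-share /thecus/raid-share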


NFS transfers:
[root@HP-DL360 ~]# time dd if=/dev/zero of=/thecus/raid-share/largefile.iso bs=1K count=1M


real 0m1.066s
user 0m0.000s
sys 0m0.000s
[root@HP-DL360 ~]# time dd if=/dev/zero of=/thecus/raid-share/nfs.iso bs=1K count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 51.0509 seconds, 21.0 MB/s

real 0m51.122s
user 0m1.560s
sys 0m7.920s

Using SMB was a bit faster, but not by much:
[root@HP-DL360 ~]# mount.cifs //192.168.2.200/raid-share /mnt/raid-share/
Password:
[root@HP-DL360 ~]# time dd if=/dev/zero of=/mnt/raid-share/smb.iso bs=1K count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 28.0072 seconds, 38.3 MB/s

real 0m28.015s
user 0m2.104s
sys 0m11.565s

Creating a file on the local disk:
[root@HP-DL360 ~]# time dd if=/dev/zero of=/smb.iso bs=1K count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 23.9846 seconds, 44.8 MB/s

real 0m24.005s
user 0m2.324s
sys 0m12.529s


Creating a folder in NFS:
[root@HP-DL360 /]# time mkdir /thecus/raid-share/test2

real 0m0.036s
user 0m0.000s
sys 0m0.000s

Creating a folder in SMB:
[root@HP-DL360 /]# time mkdir /mnt/raid-share/test1

real 0m0.019s
user 0m0.000s
sys 0m0.000s




rsync to SMB:

[root@HP-DL360 /]# rsync -avz --progress /*.iso /mnt/raid-share/
building file list ...
4 files to consider
100MB.iso
104857600 100% 32.96MB/s 0:00:03 (xfer#1, to-check=3/4)
10MB.iso
10485760 100% 28.57MB/s 0:00:00 (xfer#2, to-check=2/4)
1GB.iso
1048576000 100% 31.12MB/s 0:00:32 (xfer#3, to-check=1/4)
1MB.iso
1048576 100% 996.11kB/s 0:00:01 (xfer#4, to-check=0/4)



FTP upload to the Thecus:
ftp> put ftp.iso
local: ftp.iso remote: ftp.iso
227 Entering Passive Mode (192,168,2,200,124,115)
150 Accepted data connection
226-File successfully transferred
226 14.233 seconds (measured here), 70.26 Mbytes per second
1048576000 bytes sent in 14 seconds (7.2e+04 Kbytes/s)



FTP download from the Thecus:
ftp> get ubuntu-9.10-server-amd64.iso
local: ubuntu-9.10-server-amd64.iso remote: ubuntu-9.10-server-amd64.iso
227 Entering Passive Mode (192,168,2,200,120,240)
150-Accepted data connection
150 670220.0 kbytes to download
226-File successfully transferred
226 16.610 seconds (measured here), 39.41 Mbytes per second
686305280 bytes received in 17 seconds (4e+04 Kbytes/s)


Why would FTP download be so much slower? I thought RAID10's read performance would be better than that?
 
As the Thecus N8800 Pro looks to have basically the same hardware as the N7700 Pro, I would say it should be capable of around 100 MB/sec read/write (see http://www.smallnetbuilder.com/nas/nas-reviews/31120-thecus-n7700pro-reviewed). But it does depend on the OS being used on the client and the software being used to test with.

First up, realize that dd is most likely single threaded and uses synchronous IO, which means it will only issue one IO at a time and wait to hear back before issuing the next one. This works fine for local file IO, but over a network the latency is much higher and file IO performance generally drops because of that extra latency. The way to get around it is to use asynchronous IO and keep multiple IO requests in flight at once; I believe Iometer and Iozone can do async IO with multiple outstanding requests. Even so, if you look at your results you are not that much slower using dd on the SMB test than on the local test. Generally, though, small block sizes (less than 16KB) will not show maximum performance. I would recommend testing with 64K or 1MB blocks and seeing what your results are.
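
Something along these lines should do it (the file name is just a placeholder, and dropping the client's page cache before the read-back keeps it from being answered out of local RAM):

# write 1GB using 1MB blocks
time dd if=/dev/zero of=/mnt/raid-share/bigblock.iso bs=1M count=1024
# flush the client cache, then read it back
echo 3 > /proc/sys/vm/drop_caches
time dd if=/mnt/raid-share/bigblock.iso of=/dev/null bs=1M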

For your rsync results, I think it might boil down to the fact that rsync was not designed for high performance. Just a guess though.
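
One thing I do know is that the -z flag compresses everything in flight, which burns CPU and gains nothing on .iso files that are already about as dense as they can get. It might be worth a run without compression (--whole-file should already be the default for a copy onto a locally mounted share, but it does not hurt to be explicit):

rsync -av --whole-file --progress /*.iso /mnt/raid-share/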

With your FTP results I think you are correct; you should be able to see read speeds that are similar to your write speeds. The only thing I know is that the client FTP program can affect transfer speeds quite a bit, at least in Windows. The default FTP client in Windows just plain sucks... I have found FileZilla works much better. Not sure, but a different FTP client might give better results for you.
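
On a Linux client, lftp is an easy one to try since its pget command can split a download into parallel segments (the user name below is just a placeholder):

lftp -u admin 192.168.2.200
lftp> pget -n 4 ubuntu-9.10-server-amd64.iso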

As a disclaimer I should say that I have not used the Thecus unit you are asking about, and I have limited experience with NFS. Most of my testing has been with SMB on the different hardware I have here at home. So far I have not had much luck getting high-speed file transfers using Ubuntu or other Linux/Unix OSes on the client; using Windows Vista SP1, Windows 7, or even Windows XP as the client has given me better SMB transfer speeds.

00Roush
 
Here's an interesting thing:

Mounting the NAS (well, SAN in this case) directly via iSCSI and formatting the partition with ext3 yields the following results:

[root@HP-DL360 ~]# dd if=/dev/zero of=/iscsi/file1 bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 18.4433 seconds, 116 MB/s
[root@HP-DL360 ~]# dd if=/dev/zero of=/iscsi/file2 bs=1G count=2
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 18.936 seconds, 113 MB/s
Reading back is a bit slower:
[root@HP-DL360 ~]# dd if=/iscsi/file1 of=/dev/null bs=1M
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 26.2147 seconds, 81.9 MB/s
[root@HP-DL360 ~]# dd if=/iscsi/file1 of=/dev/null bs=1G
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 27.2585 seconds, 78.8 MB/s


So using iSCSI does improve things a bit, but I would still expect better throughput. And I would prefer not to use iSCSI.
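
For completeness, the iSCSI attach was nothing special, just the stock open-iscsi tools and a single ext3 partition (the target IQN below is a placeholder, and /dev/sdb is simply where the LUN happened to show up on this server):

iscsiadm -m discovery -t sendtargets -p 192.168.2.200
iscsiadm -m node -T iqn.2004-08.com.thecus:example-target -p 192.168.2.200 --login
mkfs.ext3 /dev/sdb1    # after creating one partition with fdisk
mount /dev/sdb1 /iscsi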
 
Well it is good to see you can get better throughput. But it does look like you used 1MB block size instead of 1KB block size like you did in your first tests so it might not just be iSCSI. Have you tried 1MB block sizes using SMB?

I am not sure why you would expect higher throughput... A single gigabit connection is pretty much limited to 119 MB/sec max throughput in a single direction without any overhead. To see higher throughput on a single connection you would need a 10 gigabit adapter in both the NAS (SAN) and the client. If you were using multiple clients you could probably get higher throughput out of the NAS by using both gigabit ports on the NAS. (ie teaming)

00Roush
 
00Roush said:
Well it is good to see you can get better throughput. But it does look like you used 1MB block size instead of 1KB block size like you did in your first tests so it might not just be iSCSI. Have you tried 1MB block sizes using SMB?

It was actually a different test :)

00Roush said:
I am not sure why you would expect higher throughput... A single gigabit connection is pretty much limited to 119 MB/sec max throughput in a single direction without any overhead. To see higher throughput on a single connection you would need a 10 gigabit adapter in both the NAS (SAN) and the client. If you were using multiple clients you could probably get higher throughput out of the NAS by using both gigabit ports on the NAS. (ie teaming)

AFAIK, a gigabit NIC's throughput is 125MB/s (source: http://en.wikipedia.org/wiki/List_of_device_bandwidths#Local_area_networks), which I'm not getting at all. So I would at least like to max out the NIC, and would have expected an 8-drive unit to be able to do that. But I'm getting nowhere near 125MB/s, let alone 100MB/s.
 
00Roush said:
Well it is good to see you can get better throughput. But it does look like you used 1MB block size instead of 1KB block size like you did in your first tests so it might not just be iSCSI. Have you tried 1MB block sizes using SMB?

Running the same tests as my original ones on the iSCSI mount yields the following:

[root@HP-DL360 ~]# time dd if=/dev/zero of=/iscsi/largefile.iso bs=1K count=1M
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 19.8628 seconds, 54.1 MB/s

real 0m19.869s
user 0m2.468s
sys 0m15.505s
[root@HP-DL360 ~]# dd if=/iscsi/largefile.iso of=/dev/null bs=1K
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 6.79463 seconds, 158 MB/s



[root@HP-DL360 ~]# rsync -avz --progress /*.iso /iscsi/
building file list ...
9 files to consider
100MB.iso
104857600 100% 42.15MB/s 0:00:02 (xfer#1, to-check=8/9)
10MB.iso
10485760 100% 15.20MB/s 0:00:00 (xfer#2, to-check=7/9)
1GB.iso
1048576000 100% 27.68MB/s 0:00:36 (xfer#3, to-check=6/9)
1MB.iso
1048576 100% 6.13MB/s 0:00:00 (xfer#4, to-check=5/9)
ftp.iso
1048576000 100% 28.66MB/s 0:00:34 (xfer#5, to-check=4/9)
hp-dl360.iso
1073741824 100% 26.28MB/s 0:00:38 (xfer#6, to-check=3/9)
largefile.iso
1073741824 100% 28.66MB/s 0:00:35 (xfer#7, to-check=2/9)
smb.iso
1073741824 100% 26.79MB/s 0:00:38 (xfer#8, to-check=1/9)
ubuntu-9.10-server-amd64.iso
686305280 100% 26.26MB/s 0:00:24 (xfer#9, to-check=0/9)

sent 6123130115 bytes received 218 bytes 28950970.84 bytes/sec
total size is 6121074688 speedup is 1.00
 
SoftDux-Rudi said:
It was actually a different test :)

AFAIK, a gigabit NIC's throughput is 125MB/s (source: http://en.wikipedia.org/wiki/List_of_device_bandwidths#Local_area_networks), which I'm not getting at all. So I would at least like to max out the NIC, and would have expected an 8-drive unit to be able to do that. But I'm getting nowhere near 125MB/s, let alone 100MB/s.

I suppose it depends on how you calculate it (http://en.wikipedia.org/wiki/Binary_prefixes). I guess I always calculate using MiB.
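
Rough math on why I quoted 119 rather than 125 (the overhead figure is only a ballpark):

1 Gbit/s / 8 = 125,000,000 bytes/s, which is about 119 MiB/s of raw line rate
Ethernet + IP + TCP headers eat roughly 5-6% of every 1500-byte frame
usable payload is therefore around 125 MB/s x 0.94, so roughly 117 MB/s, or about 112 MiB/s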

00Roush
 
So have you gotten a chance to test with a Windows client yet? If you don't mind me asking, what OS is going to be on the clients that will use the Thecus?

00Roush
 
We have an 8800SAS with 4 SATA drives in it (RAID 6). Running XenServer with the VM stored on the NAS via an NFS connection, we get a read rate of about 80MB/s.
 
If you don't mind me asking, are you using any special NFS settings? Also, what OS is on the client end?

Thanks,
00Roush
 
00Roush said:
If you don't mind me asking, are you using any special NFS settings? Also, what OS is on the client end?

Thanks,
00Roush

No special settings at all. We are running XenServer 5.6 (haven't upgraded to FP1 yet) on a couple of dual-processor, 6-core Dell servers. The client OS is CentOS 5.5 64-bit with the Xen tools installed. Xen is just configured to use the NAS device as the main storage repository via an NFS connection. No tweaking of the settings has been done and jumbo frames are disabled.
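
The storage repository itself was created the standard way (XenCenter does the same thing under the hood), roughly like this, with the server address and export path obviously being placeholders for our environment:

xe sr-create type=nfs content-type=user shared=true name-label="NAS NFS SR" device-config:server=192.168.0.50 device-config:serverpath=/raid0/data/xen-sr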

Another VM runs a MySQL database containing 4 tables which total about 42 million rows. Every night we import new data. With the storage on the NAS this took just under 2 hours; when we switched it to local storage (RAID-1 on a couple of 5k rpm SATA drives, the same type of drives that are in the NAS) it now takes about 1 hour 40 minutes.
So the NAS does slow it down a little, but not by much, and in this setup the NAS was not configured for speed. The main reason we switched the storage to local disks was that the data is unimportant, since it can be reimported. Having the data local also reduces the load on the NAS in case it is being used for other things at the same time, and it gave us a good indication of the speed of the NAS in that configuration, as we are using more and more NAS devices.
 
Thanks for the info on your setup. Since there are so many Linux/Unix OSes out there with different implementations of NFS, I figured it might help to know which one is working for you, and whether any special settings are needed to see good performance.

00Roush
 
Hello,
I had problems with NFS performance too, and the solution was to choose async mode. The client is an openSUSE 12.3 Factory PC, using NFS v4.
In async mode I can reach around 65MB/s (the local HDD is slow: a WD Green).
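
On the server side, async is just an export option; if your NAS lets you edit the exports directly, the line looks something like this (the path and subnet are only examples):

/raid0/data/share 192.168.1.0/24(rw,async,no_subtree_check)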
Hope this can help you.
 
