
[HELP] TS-509 Performance


v2k

New Around Here
I just set up a TS-509.

I'm seeing transfer rates of ~28 MB/s.

1. What's the best way to benchmark file write and read speeds?
2. How do I determine why I'm seeing slow rates?

The setup:
- D-Link DIR-655 router
- 2 Linux boxes (Intel quad cores with 8 GB RAM) and the QNAP TS-509, all on gigabit Ethernet via the router
- NFS file shares (tested Samba and speeds seem to be the same)

Drives:
- Seagate ST3500641AS 2.AA 465.76 GB
- Seagate ST3500641AS 2.AA 465.76 GB
- Maxtor 7H500F0 HA43 465.76 GB
- Maxtor 7H500F0 HA43 465.76 GB

This is the same transfer rate I would see copying from one Linux box to the other, without the QNAP involved. I'm assuming it's something to do with my network, but how do I track it down?

ethtool dumps:

QNAP:
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised auto-negotiation: Yes
Speed: Unknown! (65535)
Duplex: Unknown! (255)
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Current message level: 0x000000ff (255)
Link detected: no

Linux box:
Settings for eth0:
Supported ports: [ TP MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: MII
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: pumbg
Wake-on: g
Current message level: 0x00000033 (51)
Link detected: yes
 
> I just setup a TS-509. [...] and 509 QNAP with gigabit ethernet via router [...] This transfer rate is the same rate I would see if I was copying from 1 linux box to the other; without the QNAP. I'm assuming it's something to do with my network, but how do I track it down?

Most likely it's the NICs in your Linux hosts. Get proper Intel PRO server NICs. I've seen reports that the D-Link can get around 90 MB/s, so in theory it's capable of those speeds; however, remember the "good, fast, cheap" equation: you get to pick any two.

Personally, I would put in a (good, fast, but not cheap) wire-speed switch for all of the hosts to connect to if you want to try to achieve enterprise network speeds.
 
Ah, that might make sense.

I'm using a GA-P35-DS3R motherboard, which has a Realtek 8111B chip (10/100/1000 Mbit).

I ran bonnie++ just on the Linux host, and it seems to be really slow itself (if I'm reading this right; I'm not too familiar with this stuff, as I'm not a hardware guy):

http://novuscom.net/~vjewlal/bonnie.html

I'm running it over NFS right now to compare...
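For reference, bonnie++ is usually pointed at a directory with a data set larger than RAM, otherwise the page cache masks the real disk speed. A sketch, assuming the NFS share is mounted at /mnt/nfs (a hypothetical path; substitute your own mount point):

```shell
# -d: hypothetical NFS mount point; substitute your actual path
# -s 16g: test file is 2x the 8 GB of RAM, so caching can't inflate results
# -u nobody: bonnie++ refuses to run as root without an explicit user
bonnie++ -d /mnt/nfs -s 16g -u nobody
```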
 
So I tried copying 7.5 GB on the QNAP, from one folder to another.

It took

real 6m13.180s

for 7812192 kB. So that's:

(7812192 / (6*60 + 13.18)) / 1024 ≈ 20.44 MB/s

Does that seem reasonable? It seems really slow to me.
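As a sanity check on the arithmetic, the same calculation as a one-liner (numbers taken from the `time` output above):

```shell
# kilobytes copied, divided by elapsed seconds, converted to MB/s
awk 'BEGIN {
  kb   = 7812192          # size of the copy, in kB
  secs = 6*60 + 13.18     # "real" time reported by the time command
  printf "%.2f MB/s\n", kb / secs / 1024
}'
# prints 20.44 MB/s
```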
 
You first need to test your network connection. A gigabit connection between current-generation computers with PCIe NICs on both ends should be able to sustain close to 100 MB/s over TCP/IP (the theoretical max is 125 MB/s). With PCI NICs, you will get closer to 60-70 MB/s.
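The 125 MB/s ceiling is just the gigabit line rate divided by eight bits per byte; a quick check:

```shell
# 1000 Mbit/s line rate, 8 bits per byte -> theoretical ceiling in MB/s.
# Classic 32-bit/33 MHz PCI tops out around 133 MB/s, shared across the
# whole bus, which is why PCI NICs tend to land in the 60-70 MB/s range.
awk 'BEGIN { printf "%.0f MB/s\n", 1000 / 8 }'
# prints 125 MB/s
```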

You said "This transfer rate is the same rate I would see if I was copying from 1 linux box to the other", so let's find what's limiting you there, first. It's best to do this with something that doesn't get your hard drives involved, since they could be limiting performance.

The best tool to use is iperf/jperf. This looks like a decent tutorial for iperf on Linux. Use a buffer length of 32-64 KB instead of the 8 KB default or you'll get low readings.
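A minimal invocation along those lines, assuming classic iperf (version 2) is installed on both hosts; `-l` sets the read/write buffer length:

```shell
# On the receiving host: start an iperf server (TCP, port 5001 by default)
iperf -s

# On the sending host (the address is an example; use your server's IP):
# -l 64K uses a 64 KB buffer per read/write instead of the 8 KB default
# -t 10 runs the test for 10 seconds
iperf -c 192.168.0.100 -l 64K -t 10
```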
 
I tried playing with various window sizes, but it didn't seem to make a difference. The maximum here is 1000 Mbit/s, right?

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.0.100 port 5001 connected with 192.168.0.102 port 52350
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 817 MBytes 686 Mbits/sec
[ 4] local 192.168.0.100 port 5001 connected with 192.168.0.102 port 52356
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 727 MBytes 610 Mbits/sec
[ 4] local 192.168.0.100 port 5001 connected with 192.168.0.102 port 52359
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 734 MBytes 616 Mbits/sec
[ 4] local 192.168.0.100 port 5001 connected with 192.168.0.102 port 52360
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 736 MBytes 617 Mbits/sec
[ 4] local 192.168.0.100 port 5001 connected with 192.168.0.102 port 52363
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 725 MBytes 608 Mbits/sec
 
I suggested changing buffer length, not window size.
Anyway, those numbers are typical for a gigabit connection with a PCI-based NIC on either or both ends.

At any rate, they show that the network connection isn't limiting you.

To get the drives out of the equation you need to use something like iozone.
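A starting point for iozone, as a sketch; the test file should be at least twice RAM so the page cache doesn't inflate the results, and the scratch-file path here is hypothetical (put it on the volume under test):

```shell
# -i 0: write/rewrite test; -i 1: read/reread test
# -r 64k: 64 KB record size; -s 16g: 16 GB test file (2x the 8 GB of RAM)
# -f: path of the scratch file on the volume being measured
iozone -i 0 -i 1 -r 64k -s 16g -f ./iozone.tmp
```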
 
Yeah, I couldn't find any buffer options in iperf.

Are the bonnie++ benchmarks of no use?
 
