My DIY NAS performance (Ubuntu)

Fantastic! Congratulations on your excellent speeds.

Would you share your tuning tips for Ubuntu 9.04 server?
 
I doubt any of you have maxed out your NAS unless you have a super-fast machine on the client side. Here is how I tested mine.

NAS side:
Core 2 Duo 2.8 GHz, 4 GB RAM
6 × 1 TB Seagate 7200.12 in RAID 10

A Bonnie++ test shows sequential block input (read) of about 300 MB/s and block output (write) of about 220 MB/s. The array uses chunk=64k, layout=n2:

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hua-desktop      8G 81413  97 222117  32 66572  12 82556  97 306119  39 340.3   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 11716  34 +++++ +++  8783  28 11249  32 +++++ +++  8181  26
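
For anyone who wants to reproduce the setup, a minimal sketch (the device names and mount point are placeholders, not my exact configuration):

# create the 6-disk RAID 10 array with 64k chunks and near-2 layout
mdadm --create /dev/md0 --level=10 --raid-devices=6 --chunk=64 --layout=n2 /dev/sd[b-g]
# run bonnie++ with a working set of twice the RAM (8 GiB) so caching doesn't skew the numbers
bonnie++ -d /mnt/array -s 8192 -u nobody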

I use "iperf" to test the raw network performance. It has to be run on both sides, one as server, the other as client. The standard test only involve RAM sub system, cpu, network card/bus, cable and switch/hub if there is any, as Iperf general the test data in RAM and send it to server side via network and measure the speed. So any bottle neck in the sub system above on both client and server will limit your max speed. Here is result

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.5 port 5001 connected with 192.168.1.1 port 54482
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.10 GBytes   940 Mbits/sec
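
For reference, the commands behind a run like this are simply (a sketch, using the same addresses as in the output above):

# on the NAS (192.168.1.5), start the server side
iperf -s
# on the client, run the standard 10-second test against it
iperf -c 192.168.1.5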

My best result is about 993 Mbits/sec, depending on cable length, etc. Please note the standard test doesn't involve the disk subsystem; the data flow is:


RAM -> CPU -> NIC -> Cable/Switch -> NIC -> CPU -> RAM

Further, iperf can use a local file as input instead of random data in memory. Therefore I generated a 30 GiB file on my NAS and used it for the test:
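
Something along these lines (the file name here is just a placeholder):

# create the 30 GiB test file, then stream it through iperf instead of generated data
dd if=/dev/zero of=bigfile30g bs=1M count=30720
iperf -c 192.168.1.1 -F bigfile30g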

------------------------------------------------------------
Client connecting to 192.168.1.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.5 port 60789 connected with 192.168.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-276.4 sec  30.0 GBytes   932 Mbits/sec

About 111.14 MiB/s is the maximum speed my NAS can achieve when the disk subsystem is involved.
 
Hello guys,

I am trying to increase my NAS I/O numbers. My setup is:
Ubuntu (server side) with IOzone (to measure performance) and Samba
Windows XP (client side) connected to my NAS's 500 GB hard drive via Samba.

I have done some testing and the write and read speeds are 23 MB/s and 32 MB/s respectively. I know these numbers are very low, and I have been trying to tune the smb.conf file for quite some time now. I have played with all the options, like socket options (the SO_RCVBUF and SO_SNDBUF buffer sizes), max xmit, and use sendfile, but maybe I am not hitting the right numbers, or the problem lies someplace else.
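
For reference, this is the sort of thing currently in my [global] section (the values are just what I'm experimenting with, not known-good numbers):

# excerpt from /etc/samba/smb.conf
[global]
    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    use sendfile = yes
    max xmit = 65535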

I have tried to find solutions and have heard a lot of suggestions, like using jumbo frames (a larger MTU) or just investing in a better processor; a sketch of the jumbo-frame change is at the end of this post. It seems you guys have a lot of experience with this, and I would appreciate any opinions.
If you guys need any specific information, please let me know and I will be glad to provide it.
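
For example, the jumbo-frame change would be something like this on the Linux side (interface name assumed; the NIC, switch, and client must all support jumbo frames):

# raise the MTU on the NAS network interface to 9000 bytes
ifconfig eth0 mtu 9000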

Thanks.
 
I would first try to use iperf to find out your network throughput. It is packaged for Ubuntu; for Windows you'll need to Google for a precompiled binary. To test your network, iperf has to be run on both sides: one iperf acts as the server, the other as the client.

I suggest running "iperf -s" on your Windows machine and "iperf -c <your Windows machine's IP address>" on your NAS. Test a couple of times to get an average.

Next, create a very big file (10 GiB) on your NAS using:
dd if=/dev/zero of=verybigfile bs=1G count=10

Then run:
iperf -c <your Windows machine's IP address> -F verybigfile

This will show your maximum possible throughput when the disk subsystem is involved. By the way, can you also post your hardware specs?
 
Final speeds after tuning NAS and client, Ubuntu 9.04 64-bit with Linux RAID-5:

[screenshot: raid5.jpg]


Reading from NAS:
[screenshot: 109mbread.jpg]


Writing to NAS:
[screenshot: 82mbwrite.jpg]


Reading a single large 8.1 GB HD media file from the NAS:
[screenshot: 8gbx.jpg]



Disabled all NAS onboard hardware in BIOS except PCI-E LAN.
Ubuntu 9.04 64-bit Server minimal install, mdadm and samba only.
Client running Vista x64 SP2 on RAID 0.
Gigabit switched network with CAT6 wiring.

Can you give us more information about the Ubuntu tuning?

Thanks in advance

Best Regards
 

Hi,

Maybe it's a combination of many factors, so here's an overview:

NAS Hardware:
  • Intel DG965OT motherboard with 6 onboard SATA ports.
  • Intel Celeron Dual Core, 2 GHz.
  • 4 GB RAM.
  • Onboard PCI-Express Gigabit NIC.
  • Promise TX2300 controller for 1 dedicated 2.5" OS hard disk.
  • 6 × Samsung Spinpoint F1 1 TB, 32 MB cache, all on onboard SATA ports.

Ubuntu / Linux RAID / Network:
  • Disabled all NAS onboard hardware in BIOS except PCI-E LAN.
  • Ubuntu 9.04 64-bit Server minimal install, mdadm and samba only.
  • Gigabit switched network with CAT6 wiring (Netgear WNR3500 Gigabit Router).
  • Linux RAID chunk size of 64 KB.
  • Linux ext3 filesystem.

On the Linux side, turn off access-time updates for both the array and the mount folder:

[screenshot: aulat.jpg]

[screenshot: aulat2.jpg]
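
In text form this amounts to something like the following (device and mount point are examples, not my exact entry):

# /etc/fstab: mount the array without access-time updates
/dev/md0  /mnt/array  ext3  defaults,noatime  0  2

# or apply it to a mounted filesystem on the fly
mount -o remount,noatime /mnt/array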



The things I found most directly affecting performance: the chunk size of the Linux RAID array, and the number of disks in it. You can adjust the chunk size for faster reads or faster writes; I found 64 KB the best of both worlds, otherwise choose 256 KB. As for the number of disks: the more, the faster. It really matters! I initially started with 4 disks; after adding the 5th disk things got faster, and after adding the 6th disk things got really fast.
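
For reference, adding a disk to the array goes something like this with mdadm (device names are placeholders; the reshape itself takes many hours on arrays this size):

# add the new disk as a spare, then reshape the array to include it
mdadm --add /dev/md0 /dev/sdh
mdadm --grow /dev/md0 --raid-devices=6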

So there are many factors that affect the final speed you get:
  • How fast is your client?
  • Is your network Gigabit switched (with perhaps CAT6 wiring)?
  • Are all disks on onboard SATA ports, and do you have an onboard PCI-Express NIC?
  • How many disks are in the array, and what chunk size does it use?
  • How much cache is on your disks, and are the disks themselves fast enough hardware-wise (e.g. 5400 vs. 7200 RPM)?
  • How much RAM does the NAS have?
  • Is your NAS OS on a separate dedicated disk?

Maybe plain disk performance is generally underestimated. On high-end commercial NAS and SAN systems there is a well-known split between Fibre Channel disks (fast and expensive, with high I/O) and SATA disks (cheap, e.g. for archiving), so which disks you use, and how many are in the array, does make a difference.

hua_qiu's comment makes a lot of sense, I think:
I doubt any of you have maxed out your NAS unless you have a super-fast machine on the client side.

And his results are extremely good, with 6 × 1 TB Seagate 7200.12 disks in RAID 10, 4 GB RAM, and a Core 2 processor.

Whenever you read from or write to your NAS, you don't only test your NAS's speed but also your client and network speed; any of those three can be the bottleneck. XP, for example, is much slower than Vista when pumping data to and from Samba or network shares. XP SP2 is faster than XP SP1, and to my surprise Vista SP2 seems a lot slower than Vista SP1 when writing to the NAS.
 