
How To Build a Really Fast NAS - Part 4: Ubuntu Server


madal

Guest
I haven't seen anywhere whether or not you tried NFS vs. CIFS/Samba. I personally haven't experimented with it, but the geek boards I frequent swear NFS is faster/more efficient.

Just throwing it out there........

Madal
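
For anyone who wants to try that comparison, a rough sketch of mounting the same data both ways from a Linux client looks something like this (the server name "nas", share name, and mount points are made-up examples, not anything from the article):

# CIFS/Samba mount of a hypothetical share "media" on server "nas"
sudo mkdir -p /mnt/cifs /mnt/nfs
sudo mount -t cifs //nas/media /mnt/cifs -o username=guest

# NFS mount of the same data, assuming the server exports /export/media
sudo mount -t nfs nas:/export/media /mnt/nfs

# Time a large sequential copy from each mount and compare
time cp /mnt/cifs/bigfile.iso /tmp/
time cp /mnt/nfs/bigfile.iso /tmp/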
 
No NFS. I am not playing with FTP either right now.
 
Compare the "pros"

Have you thought about comparing with the free tools available, like FreeNAS and OpenFiler?

I have done a similar thing: hating to throw out perfectly good old computers, I've built a small SAN for my small company.

We use an old enterprise-class quad-Xeon Compaq server with 4 gigabit cards and 2 professional RAID controllers.
The system has 12 internal drives and 2 libraries of 18 disks, all 10K 36 GB SCSI disks.

With OpenFiler we found that it performs better as an iSCSI target for our VMware cluster when the 48 drives are presented as single disks in software RAID handled by OpenFiler than when using the internal RAID controllers.

We did some testing with both RAID 10 and RAID 5. In RAID 10 there's little difference, but with RAID 5 there's almost 25% better performance from software RAID.

I'd be interested to see if you would gain anything from OpenFiler / FreeNAS.

Jeppe, Denmark.
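
(For reference, a Linux software RAID 5 array of the sort described above is built roughly like this with mdadm; the device names and mount point are examples, not the actual setup:)

# Create a 4-drive software RAID 5 array from example devices
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial build, then put a filesystem on the array and mount it
cat /proc/mdstat
sudo mkfs.ext3 /dev/md0
sudo mount /dev/md0 /srv/storage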
 
RAID 5?

I'd be very interested in seeing RAID 5 stats for the Ubuntu box.

If you are ever able to set up and run those tests, I'd love to see them.
 
Have you thought about comparing with the free tools available, like FreeNAS and OpenFiler?
Well, Ubuntu Server is free. I ran both FreeNAS and Ubuntu on the Atom NAS and found Ubuntu had better performance.
Checking out OpenFiler is on my To Do list.
 
What about a trouble-free NAS?

Tim,
I guess there are a lot of casual buyers/users out there who do not exactly need the performance edge that RAID is supposed to bring. And merging devices at block level, like with RAID or LVM, requires (or should require) continuous system supervision, which is a drag.

I am experimenting with a low-power, low-performance :), low-maintenance, big-iron home server that uses union mounts (aka aufs under Linux). A union merges the contents of directories with the same name; it works at the filesystem level. Although stacking read/write drives is not the primary intent of union mounts, it seems to work quite well (performance, overhead, fault tolerance). The box has 2 data volumes, the largest being made of 6 dissimilar drives.
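
(For the curious, the union setup described above looks roughly like this with aufs; the drive and mount-point names are made up for illustration:)

# Mount two dissimilar data drives individually
sudo mount /dev/sdb1 /mnt/disk1
sudo mount /dev/sdc1 /mnt/disk2

# Merge them into one read/write tree; create=mfs sends new files to the branch with the most free space
sudo mount -t aufs -o br=/mnt/disk1=rw:/mnt/disk2=rw,create=mfs none /media/pool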
 
I am experimenting with a low-power, low-performance :), low-maintenance, big-iron home server that uses union mounts (aka aufs under Linux).
I guess there are some home/SOHO users who don't care about performance. But with the large media file sizes that many people have today, a slow NAS gets old real fast.
 
Hi Tim,

Love this really fast NAS series! I'm adding my vote for the Nexenta test. There's a lot of buzz these days about how OpenSolaris would be our bestest favourite if we'd all just give it a fair shake. Also, there's a lot of storage talk about how ZFS is supposed to be the be-all and end-all of storage. It would be great to see it alongside these others.

Also, just for something a little out of the ordinary, have you considered testing UNRAID by Lime Technology? It sounds a little like some of the stuff you saw in WHS but done with a Linux base. It's just different enough that it might be worthwhile to include here.

And one last item - I realize this gets out of the simple single-task NAS realm, but it would be really great to see you run a Windows 2008 Server test on this setup as well. I did some preliminary testing with Win 2008 and was blown away by the performance I saw. I didn't have time to do it more rigorously, but I suspect you might get close to your 100 MB/sec target with Server 2008. I'll admit it pains me to even say this, being a die-hard Linux guy, but there it is.
 
Checking out OpenFiler is on my To Do list.
I must say I'm quite looking forward to this, since I haven't been able to find a single OpenFiler review online yet.

Corndog, UNRAID has already been reviewed on this site; see here. On the other hand, it might be worth it to re-review it and compare the numbers. :)
 
Fantastic series

Good afternoon,

Thank you for a very informative series on building a NAS. I was curious to see how Ubuntu would perform because I am a fan of that OS. I will be interested to see your results with Vista because Microsoft has updated SMB to version 2.0 (also in Server 2008), and if you run a connection from Vista to another Vista (or Server 2008) box you should see better network performance.

As a reference I am including one of the articles discussing it:

http://blogs.msdn.com/chkdsk/archive/2006/03/10/548787.aspx

Best regards,

T.R. - Milwaukee, WI
 
Jalyst and corndog,
I'm not real hot on looking at Nexenta anytime soon, since my trusted advisor on all things network filesystemed, Don Capps, didn't have very nice things to say about it.
 
And one last item - I realize this gets out of the simple single-task NAS realm, but it would be really great to see you run a Windows 2008 Server test on this setup as well.
I might look at Server 2008. But it really raises the question of how much people are willing to spend for > 70 MB/s throughput, which is where I seem to be stuck so far.
Before I go down that path, I will look at Vista-to-Vista performance, which I suspect will provide high performance without the complexity/cost of Server, but with a 10-simultaneous-connection limit.
 
More on iSCSI?

Hi Tim,
first of all, thanks for all the great work on NAS research and the other stuff too!

I assume most of the optimizations needed to hit your 100 MB/s target are on the server side (things like the quality of the SATA drivers and software RAID overhead), rather than in protocol efficiency or the quality of the TCP/IP stack.

But the topic of iSCSI, and whether it is worth considering for home applications, is one thing I'd like to understand better. Most music/AV appliances will only talk CIFS, so perhaps it is too computer-centric to be of much interest to others.
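
(As a rough sketch of what home iSCSI involves on the Linux side, using tgtadm on the server and open-iscsi on the client; the IQN, IP address, and backing device are made-up examples:)

# Server side: export a block device as an iSCSI target (scsi-target-utils)
sudo tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2008-06.local.nas:disk0
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/md0
sudo tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

# Client side: discover the target, log in, then use the new /dev/sdX like a local disk
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10
sudo iscsiadm -m node -T iqn.2008-06.local.nas:disk0 -p 192.168.1.10 --login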

As a side data point, my file server is an Intel E2160 (1.8 GHz) on a P35 motherboard with 8 SATA ports, 2GB RAM, and four 750GB SATA drives, running OpenSolaris build 77 with ZFS RAIDZ1 across the drives. I mainly access it via SSH (scp at the command line), and from a high-end Mac I can get around 25MB/sec. I've found many Mac and Windows GUI front ends for SSH (like Cyberduck for Mac and WinSCP for Windows) really reduce throughput vs. the command-line tools on Unix. With WinSCP on Windows, I can only get around 5MB/sec at best on an Intel E8400 client! Obviously SSH adds a lot of computational overhead, so I should probably upgrade to a later OpenSolaris build to try their new integrated CIFS server (i.e. not Samba) and/or experiment with NFS/iSCSI.
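
(For context, a RAIDZ1 pool like the one described is created along these lines on OpenSolaris; the pool name and device names are illustrative only:)

# Single-parity RAIDZ pool across four drives
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool status tank

# Filesystems are just datasets; NFS sharing is a pool property
zfs create tank/media
zfs set sharenfs=on tank/media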
 
RAID controller

Is there a reason why you're not using a hardware RAID controller? I suspect performance would increase if you did. I've been looking at running Ubuntu Server, but if 60MB/s is the best it can do, then what's the point? I would just go buy something off the shelf instead.
 
Is there a reason why you're not using a hardware RAID controller? I suspect performance would increase if you did. I've been looking at running Ubuntu Server, but if 60MB/s is the best it can do, then what's the point? I would just go buy something off the shelf instead.

In some recent file copy tests I have done at home between a computer running Ubuntu Server (1GB RAM) and another running Windows Vista SP1 (4GB RAM), I have seen read speeds of around 90-110 MB/sec using only single disks in each machine. So from what I can tell, the Ubuntu Server OS itself is only limited by disk speed. Each machine used a single drive that has been benchmarked at around 120 MB/sec. The weird thing is that this is not consistent with my Iozone results. Iozone results using a 64k record size show read speeds of around 60 MB/sec for file sizes under 1GB; 1GB to 4GB file sizes drop to about 55 MB/sec.

00Roush
 
Is there a reason why you're not using a hardware RAID controller?
I'll get to RAID controllers eventually. I didn't start there because only full hardware-based controllers are going to increase throughput, and they add $200 - $300 of cost.
 
I have seen read speeds of around 90-110 MB/sec using only single disks in each machine.
This must be for smaller file sizes? There must be some caching going on somewhere, since single SATA drives, once you exceed any local cache, are limited to 40 - 70 MB/s for data coming directly off the platter.
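
(Anyone who wants to check this on their own drive can measure the raw sequential rate, independent of the network; /dev/sda is an example device name:)

# Buffered sequential read timing straight off the drive
sudo hdparm -t /dev/sda

# Or read a few GB directly, bypassing the filesystem
sudo dd if=/dev/sda of=/dev/null bs=1M count=4096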
 
This must be for smaller file sizes? There must be some caching going on somewhere, since single SATA drives, once you exceed any local cache, are limited to 40 - 70 MB/s for data coming directly off the platter.

File sizes were from 1.2 GB to 2.63 GB. 4 files for a total of 7.61 GB.

I would say that some caching (buffering) is going on. This would be the only way you could keep the network pipe full.

From my understanding, Windows reads from the disk into memory and then the data is read directly from memory to the network. Different Windows versions have different network buffer sizes, but in general I don't think the buffer size is larger than around 16 MB (Vista?). Reads from the disk will happen as fast as possible until this buffer is full, then reads will be throttled to match how fast data is being written from the buffer to the network. So normally file copies using SMB would not saturate the local cache on the machine being read from. From what I recall, most Linux distributions buffer in a similar way for SMB/CIFS file transfers. In testing file copies I have found that only smaller file sizes actually decrease performance. This decrease is due to the hard drive having to do random reads of many files instead of sequential reads of one file.

Something else to consider is that Windows handles local file copies differently than network file copies. Most Windows versions do not use the client cache for SMB/CIFS file transfers. What I mean by this is that I can copy a 256 MB file from the server to the client many times, but the file will always be read from the server, not from the local client cache. The only time files will be read from the client cache is if offline caching (client-side caching) has been enabled. Files will always be cached on the server, though. So when I copy the same file repeatedly, it will be read directly from memory on the server without having to access the disk.
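
(On a Linux server you can take that server-side caching out of the picture between test runs by flushing the page cache; this assumes a 2.6.16 or newer kernel:)

# Flush dirty data, then drop the page cache so the next read really comes off the disk
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches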

You mentioned that once local cache is exceeded, single SATA drives are limited to about 70 MB/sec. Is there any way you could run Iozone on your server's local drive? Preferably on one of the Western Digital VelociRaptors used in this review. For example, I tested this on the same computer I had Ubuntu Server (1GB RAM) installed on, except using Win XP Pro as the OS. The command line I used was: IOZONE -Rab c:\results -i 0 -i 1 -+u -f C:\test\test.txt -q 64K -y 64k -n 512M -g 2G -z
Here are my results (file size in KB, throughput in KB/s, 64 KB record size):
Writer Report
   524288    609915
  1048576    238240
  2097152    110916
Re-writer Report
   524288    837868
  1048576    153984
  2097152    138366
Reader Report
   524288   1050065
  1048576     92405
  2097152     88569
Re-reader Report
   524288   1077876
  1048576    101125
  2097152     95892

All of this is purely from my own testing and understanding of network file copies. I am by no means an expert on this stuff so feel free to correct me.

00Roush
 
Just use file sizes well beyond your workstation or server RAM sizes and the cache effects will disappear. For large file tests we use 200 to 400GB file sizes. No test (over about 2 months now) has resulted in better than 68MB/s large file reads from a single drive over gigabit. That said, 100MB/s is not terribly difficult with RAID 0 and Vista SP1 at both ends. The QNAP TS509 in RAID 0 mode with 5 drives will nearly saturate a gigabit connection for both reads and writes - again, assuming you're using RAID 0 and Vista SP1 on the workstations. If you just copy 2 to 4 GB files at a time and average the results, you'll see large inaccuracies, as the first 1GB is copied over at very high speed until the RAM cache is exhausted. Also, Iozone is not handling load-balanced connections (which may be due to my use of the program); it under-reports substantially versus real-world measured results.
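(For example, one way to build a test file well beyond RAM size and time the read back, with made-up sizes and paths:)

# Create a ~20GB test file on the NAS share, larger than RAM on either machine
dd if=/dev/zero of=/mnt/nas/testfile bs=1M count=20480

# Time the read back over the network and work out MB/s from the elapsed time
time cat /mnt/nas/testfile > /dev/null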

Vista reports large file reads (RAID 5) from the same NAS at 97 to 100MB/s with network load balancing enabled on an LACP-capable switch and RAID 0 on the workstations. So really, the only challenges at this market level are getting RAID 5 writes over gigabit approaching 100MB/s, and doing this potentially with several workstations at the same time.

We've got an Adaptec PCIe RAID card inbound (about $300) to see how well that works for RAID 5 writes over the LAN, and certainly Ubuntu (and the rest of the Linux variants), Vista, Server 2008, etc. will have no problem handling the higher data rates.
 
