Why Cache Matters in NAS Performance


corndog

Regular Contributor
This article is interesting, especially because it uses NFS as its example. I was doing some reading today about the differences between NFS v2, 3, and 4. It seems that NFS, especially the older versions, is very different from the other file service protocols we're used to, like SMB or AFP.

The way it's described in the NFS documentation is that NFS is a "stateless" protocol. In comparison, SMB is "stateful".

Ever notice that in Windows you can go on the file server and look at a list of all the clients connected to your file server, including what shares are connected, what files are open, and how long they've been idle? You can do the same thing on Samba using the SWAT web interface. All of this information is available to you because SMB is "stateful". The server and client both maintain a long-term awareness of what the client is doing with its connection to the server.
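As an aside, on a Linux file server you don't even need the SWAT web interface for this; Samba ships a command-line tool that dumps the same state (assuming Samba is installed on the server):

```
# Lists current SMB sessions, which shares they're connected to, and
# which files are open/locked -- possible only because SMB is stateful.
smbstatus
```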

NFS is a different beastie entirely. Ever notice that you can not produce a list of connected clients and what files they have open, like you can in SWAT with SMB? I'm getting the idea that by "stateless" it means the server actually has no idea. Client access to files on the NFS server is a very, very short-term event. The file is opened, read or written, and then closed quickly. Actually, for large files, this event doesn't even cover the whole file; it covers much smaller pieces of the file (with a limit of about 1MB, which might explain the results found in this article). No longer-term knowledge of this event, or of its relation to future or competing file operations, is maintained.
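For what it's worth, that per-request transfer size is something you can see, and tune, on the client side. This is a hypothetical Linux mount line (the server name, export path, and mount point are made up; the actual maximums depend on the kernel and server):

```
# rsize/wsize set the maximum bytes moved per NFS READ/WRITE request;
# the client and server negotiate down from these values.
mount -t nfs -o vers=3,rsize=32768,wsize=32768 server:/export /mnt/nas
```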

Because of this, it is much more important with NFS to make sure writes are committed quickly, because there isn't really an orderly tracking of who is accessing what. This is why the official NFS specification requires that all disk writes are done in "sync" mode. This means the server and client are forbidden to use cache AT ALL: the server must confirm to the client that the saved file is actually written out to disk before the event is finished. Linux NFS services have the "async" flag that can be used in the exports file, but this is technically a violation of the NFS spec. I guess the only way to get around this is to build aggressive caching into the server file system, but I'm not even sure this would help, unless the file system "lies" to the NFS service about really writing to disk.
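On a Linux server, that sync/async choice shows up directly in the exports file. Here's a hypothetical /etc/exports showing both modes (the paths and subnet are made up for the example):

```
# /etc/exports
# sync  = spec-compliant: reply to the client only after data
#         actually hits stable storage
# async = faster, but the server acks writes still sitting in its RAM,
#         which is the technically-non-compliant mode described above
/export/safe   192.168.1.0/24(rw,sync)
/export/fast   192.168.1.0/24(rw,async)
```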

After reading the http://nfs.sourceforge.net page top to bottom carefully, I got the overall impression that NFS is a very exact and paranoid protocol, designed for near-absolute data integrity, NOT for performance. Heck, with version 2 in sync mode, you can reboot your NFS server in the middle of operations and nothing will be lost. The only fool-proof way to get high performance out of it is to get really, really fast disks.

But this is the real world, and my server is protected by a UPS. For all intents and purposes, it doesn't go down. I think some caching can be used in the right way, based on this fact.

NFSv4 is supposed to be stateful, like smb, and as such should be quite different.

Correct me if I've misunderstood nfs at all in this ramble.
 
Aaaaanyway,

What I was trying to get to in the previous post is the idea that it might be "wrong" to complain about caching "not working" when using NFS. There are several major configurations of nfs - even default configurations - that actually prohibit the use of cache.

This "anomaly" found in the graphs, especially in the Solaris client (didn't Sun actually create NFS, by the way?) might be explainable as "functioning as designed"...
 
Cache and performance testing.

Don, first of all, the iozone program (your baby!) is doing a great job in terms of matching up to our measured file performance, provided the command line is given correctly. It's a wonderfully thought-out little application. This particular article is very timely. In testing the Intel SS4200 vs the QNAP TS509, one thing becomes very clear.

1. The TS509's actual RAID5 performance is much, much lower than the testing would indicate. Why? The unit has 1GB of memory, and the test uses a 1GB file transfer ... as does QNAP's literature. So when you write 1 file under 1GB, the measured write performance is near 120MB/s, likely because it's going to the NAS memory...not the disk array! As soon as you exceed this, the performance drops to 10MB/s. For large files, that's pretty disappointing given its performance with a single drive...much quicker.

2. The Intel box on the other hand churns away with large files at about 34MB/s and that's using a 5.3GB test set.

So although they look the same in the 1GB file transfers, the Intel box's RAID 5 performance in our tests is 3X faster than the QNAP unit...for a fair bit less $$.

Conclusion? Tim, it might be a good idea to change your iozone numbers command parameter from 1G to 2G to test performance of these units well beyond cache.
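For reference, a run with the larger file size might look something like this (the file path and 64KB record size are just example values):

```
# Sequential write (-i 0) and read (-i 1) tests on a 2GB file (-s 2g)
# with 64KB records (-r 64k); -R and -b write an Excel-compatible report.
iozone -R -b results.xls -i 0 -i 1 -s 2g -r 64k -f /mnt/nas/iozone.tmp
```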

Cheers,
Dennis Wood
 
Tim, it might be a good idea to change your iozone numbers command parameter from 1G to 2G to test performance of these units well beyond cache.
Already doing this in the WHS testing that I'm doing now and in the TS-509 Pro testing that I did for Fast NAS Part 2.

That's also why my iozone test machine has 512MB of memory.

I didn't do this before, since it took long enough to run the tests up to 1 GB with most NASes!
 
Hello all,

The comments about the 1GB memory limit make a lot of sense, but let's keep in mind that the article we're discussing shows a large fall-off after 1 MEGABYTE, not 1 GIGABYTE... There's something else at play here.

I have a TS-509 on order, and am a little concerned about how slow it's going to be when writing large video files over a gigabyte to it. However, for the "other" uses I also have planned for it, such as 5 or 6 people working on smaller files on this NAS as primary storage, it is looking good. Word and Excel files are far less than one gig in size, so this NAS should just smoke for that kind of stuff...
 
Tim, good news. The NAS reviews got me rather pumped about the TS509 and right now I'm wondering what we'll do with it...certainly can't use RAID5. RAID 0 is great but one failure and the whole array is toast.

CD, from our testing so far, using the TS509 in RAID5 for files over 1GB will get you write speeds under 14MB/s, and reads at about 57MB/s. Files under 1GB will write at over 100MB/s, providing you're using a workstation with a RAID0 array or similar, PCIe gigabit, and SMB2 (Vista) that can deliver data at that rate. The Intel SS4200 using RAID 10 tested at sustained large-file write speeds over 51MB/s and reads at 72MB/s. The downside there is that you need 4 x 1TB drives to end up with a 2TB NAS.
 
Tim, good news. The NAS reviews got me rather pumped about the TS509 and right now I'm wondering what we'll do with it...certainly can't use RAID5. RAID 0 is great but one failure and the whole array is toast.

I can't remember (too many threads on this!), but did you try RAID 1 on the QNAP?
 
I know, the information is rather discombobulated right now...but I'll wrap it all up nice and concise in the review :)

I did not try RAID1 on the 509, but did use RAID 10 on the Intel box.
 
This "anomaly" found in the graphs, especially in the Solaris client (didn't Sun actually create NFS, by the way?) might be explainable as "functioning as designed"...
[Posting for Don Capps]
NFS V2 has sync writes, NFS V3 has async writes, and NFS v4 has async writes and even file delegations. (All the article examples used NFS V3.)

The major difference between NFS V2 and V3 is the open-to-close semantics. This permits the client to issue async writes, pretty much forever, until the close happens. At that point in time, the client's buffers are flushed to the server, and the server commits its cache to stable storage.
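As a toy illustration of those open-to-close semantics (this is an analogy using plain local file I/O, not actual NFS client code; the function name is mine):

```python
import os

def nfs3_style_write(path, chunks):
    """Analogy for NFSv3 open-to-close semantics using local I/O.

    Each write() is 'async': it lands in a buffer/cache and the caller
    gets an immediate reply. Only at close time are the buffers flushed
    and the data committed to stable storage, modeled here by flush()
    plus os.fsync() -- the rough equivalent of the close-time COMMIT.
    """
    with open(path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)       # buffered; not yet on stable storage
        f.flush()                # push application buffers to the OS
        os.fsync(f.fileno())     # force the OS cache to stable storage
```

The point of the analogy: between the first write and the final fsync, nothing is guaranteed to be on disk, which is exactly the window V3's async writes open up and the close-time commit closes.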

NFS V4 is a completely different beastie. In that world, the client can request a file delegation and takes ownership of the file, and performs I/O locally, until another client accesses the file, or the file is closed. For the specifics, please read the NFS V4 spec.
 
what about 4GB

Hi,

In the Netherlands we have some options to add 3GB to the cache of the QNAP TS-509, which means you are suddenly working with 4GB. What I wonder is if it helps a 4.37GB file AT ALL to have 4GB instead of 1GB? Or will both memory sizes work at the crappy speed, because it's one file and it simply does not fit entirely in cache?
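One way to reason about it is a back-of-the-envelope model (my sketch, not a benchmark: it assumes the NAS absorbs writes at full cache speed until RAM runs out, then drops straight to raw disk speed, which real firmware may not do):

```python
def effective_write_mbps(file_gb, cache_gb, cache_mbps, disk_mbps):
    """Average write throughput for one large file under a naive
    two-speed model: the first cache_gb go in at RAM speed, and
    only the spill-over is throttled to raw disk speed."""
    cached = min(file_gb, cache_gb)           # portion absorbed by RAM
    spilled = max(file_gb - cache_gb, 0.0)    # portion that hits the disks
    seconds = cached * 1024 / cache_mbps + spilled * 1024 / disk_mbps
    return file_gb * 1024 / seconds

# Rough numbers from this thread: ~120MB/s into cache, ~14MB/s RAID5 writes.
one_gb_cache  = effective_write_mbps(4.37, 1.0, 120.0, 14.0)
four_gb_cache = effective_write_mbps(4.37, 4.0, 120.0, 14.0)
```

Under this (optimistic) model, the extra RAM would still help quite a bit even for a file that doesn't fit, because only the spill-over runs at disk speed; whether the TS-509's firmware actually behaves this way is another question.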

Thx for helping me out here,
 
