New To The Charts: Cached Performance Filters

00Roush

Looks good. Seems to bring the ranking more in line with what one would expect.

On a side note... Have you had a chance to do any testing with the Qnap TS-509 PRO on your new testbed?

00Roush
 
Thanks for the feedback.

The 509 Pro has long since gone back to QNAP. I have asked them to resubmit it for retest. But so far, they have not sent one... or any other of their new NASes.

I'm still waiting to hear back from them on the 509 Pro, 439 Pro, 539 Pro, etc...
 
I appreciate the effort, but don't think much of it as it stands. To me, it's a gory hack which merely cuts out the most obviously objectionable part of cached measurements, while still leaving in all the rest and even hiding it better from those who would at least notice the absurdity of "server" measurements exceeding theoretical network bandwidth.
 
I had actually wondered a bit about this myself, so it is nice to see it being addressed. Good work! :)

-Biggly
 
I appreciate the effort, but don't think much of it as it stands. To me, it's a gory hack which merely cuts out the most obviously objectionable part of cached measurements, while still leaving in all the rest and even hiding it better from those who would at least notice the absurdity of "server" measurements exceeding theoretical network bandwidth.
Ok... What would you propose as the solution?
 
Ok... What would you propose as the solution?

There are many potential alternatives. I think the Vista file transfer is in itself a great advance, as there's really no arguing that it gives a meaningful result -- file transfer is really a standard, and very large file transfer not only solves cache size questions, but is increasingly becoming real application usage. A single day's worth of snapshots can exceed 4 GB, a single DVD image 8 GB, and a Blu-ray image can make gigabit feel like wireless.

Vista transfers are a slice of "file transfers", and its main flaw is that it doesn't capture other common cases, e.g. XP-32 transfers. Add XP performance while it's still common, and you've got a good "real world" benchmark. But you don't have to show XP and other client or transfer tool performance for every case (unless you want to) -- you could just set up a couple of high-performance cases to illustrate the differences the clients make (e.g. XP vs. Vista, single drive vs. RAID array, and even extend this to not-so high performance -- 100 Mb/s vs. gigabit, and perhaps even leading wireless-g, n), and then normalize to a high-performance client test leaving the server as the expected bottleneck. (You've probably even done some of this already, but I don't recall the presentation at this time.)

In short, Vista and even RAID on the client can be real world, but it's not the real world for everyone -- factor out a sampling of the real world clients, and then focus on seeing the limits of the server. Drop even the 100 Mb/s performance for that, as there's typically not much to say there -- if one's really concerned about wired performance, then gigabit is an available and affordable solution; 100 Mb/s is not as hard or interesting to saturate; the differences among servers in that respect aren't great, and if they are, then that should also be typically visible with gigabit client testing.
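The "very large file transfer" idea above is easy to sketch: write a file well past the combined client and server cache sizes and time it, so the sustained rate dominates. A minimal sketch in Python -- the target path to a mounted NAS share is hypothetical, and a real test would use a file several times the NAS's RAM:

```python
import os
import time

def measure_sequential_write(path, total_bytes, block_size=1 << 20):
    """Write total_bytes of zeros to path in block_size chunks; return MB/s."""
    buf = b"\0" * block_size
    start = time.perf_counter()
    written = 0
    with open(path, "wb") as f:
        while written < total_bytes:
            f.write(buf)
            written += block_size
        f.flush()
        os.fsync(f.fileno())  # force data out of the local OS cache before stopping the clock
    elapsed = time.perf_counter() - start
    return (written / (1024 * 1024)) / elapsed

# Hypothetical usage against a mapped NAS share:
# rate = measure_sequential_write(r"Z:\bench\big.bin", 8 * (1 << 30))  # 8 GiB
```

Note that fsync only flushes the client side; the NAS's own write cache still absorbs the tail of the transfer, which is exactly why the file has to be large.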

For other "benchmark" applications, even something as simple as ATTO can also give you a nice and fast view of performance at a read vs. write level factoring out the client HD speed but including Windows protocol issues and showing the effects of buffer size on performance. Not all buffer sizes are meaningful, and ATTO is not the most thorough, but it's a good start in that direction as it often shows decent correlation with actual file transfers.
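An ATTO-style sweep is just the same timed write repeated at different transfer sizes. A rough write-only sketch (ATTO also measures reads, and a real run would use a file large enough to defeat caching):

```python
import os
import time

def sweep_block_sizes(path, file_bytes, block_sizes):
    """Time sequential writes of file_bytes at each block size; return {size: MB/s}."""
    results = {}
    for bs in block_sizes:
        buf = b"\0" * bs
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(file_bytes // bs):
                f.write(buf)
            os.fsync(f.fileno())  # include the flush in the timed interval
        results[bs] = (file_bytes / 1e6) / (time.perf_counter() - start)
    os.remove(path)
    return results

# Hypothetical usage: sweep 4 KB .. 1 MB transfers against a NAS share
# print(sweep_block_sizes(r"Z:\bench\sweep.bin", 1 << 30,
#                         [4096, 65536, 1 << 20]))
```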

IOMeter can be used to construct a suite that's more thorough and gives essentially the same or richer set of results.

I haven't played much with it, and find the Intel-centricity and weight of it unpleasant, but Intel's NAS benchmark is also interesting, and if I had a month or so to digest it and it worked out, it or something like it could in theory end up being a "comprehensive" suite.

But I'd probably go with IOMeter off a Vista client using a (largely) sequential test suite constructed by checking the access patterns of some common applications (e.g. file transfer, Photoshop). SysInternals and other more detailed capture tools (some from Intel) could be used for this.

But whatever you do, please please please don't take StorageReview's approach of creating a secret, unreproducible test suite -- let us also know enough to be able to reproduce your test configuration and see how it works in our own environments. Thanks.
 
Thanks for the thoughtful reply, Madwand. However, changing the test method "restarts the clock" in terms of being able to compare products. I am trying not to do that at this time, because of the recent change to the new testbed.
 
Ok... What would you propose as the solution?

I'd suggest eliminating all numbers for file sizes smaller than 2 GB (or 1 GB). Above 2 GB, it's very unlikely that the local cache has a significant effect on the results.
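The proposed cutoff amounts to a one-line filter over the chart data. A sketch, assuming results are (file size in bytes, throughput) pairs -- the data shape is illustrative, not the Charts' actual format:

```python
GiB = 1 << 30

def filter_cached(results, min_size=2 * GiB):
    """Drop measurements whose test file size is below min_size.

    results: iterable of (file_size_bytes, throughput_mbps) tuples.
    """
    return [(size, tput) for size, tput in results if size >= min_size]

# Example: the 1 GiB point (likely cache-inflated) is dropped
points = [(1 * GiB, 95.0), (2 * GiB, 60.0), (4 * GiB, 55.0)]
print(filter_cached(points))  # [(2 GiB, 60.0), (4 GiB, 55.0)]
```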

We can't just look at the read numbers, though. Some NASen have very different read-vs-write performance.
 
Roush, the TS-509 maxes out at ~80 MB/s reads and 50 MB/s writes on our RAID 0 workstations running Vista SP1 once the 4 GB cache is exhausted, so Tim would likely see similar numbers.

The filter is good... and having the choice to use it is even better! For one thing, if you're interested in bouncing files back and forth under the cache limits of both workstation and server, then you'll want to see how the unit behaves with smaller files too. And as soon as you start accessing the unit from several workstations (as we do in the SMB environment), it's nice to know how much RAM the NAS supports -- in my mind, that should be part of every review.

On the other hand, any slightly stressed NAS will run out of cache even under light loads, and at that point disk performance under multiple loads is everything! For writes from a fast disk array to NAS cache, we consistently see about 120 MB/s, which quickly settles into the real read/write rates as files exceed 4 GB. That sustained number is what the filtered results should tell you.
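The "fast until the cache is exhausted, then settling" behavior described above can be modeled with a simple two-phase average: the first part of the file lands in cache at burst speed, the rest at disk speed. A sketch using the round numbers from this post (4 GB cache, ~120 MB/s burst, ~50 MB/s sustained writes -- all assumptions taken from the figures quoted here):

```python
def avg_throughput(file_mb, cache_mb, cache_rate, disk_rate):
    """Average MB/s for a transfer that fills the NAS cache, then runs at disk speed."""
    cached = min(file_mb, cache_mb)          # portion absorbed at burst rate
    spill = max(0.0, file_mb - cache_mb)     # remainder, gated by the disks
    elapsed = cached / cache_rate + spill / disk_rate
    return file_mb / elapsed

# An 8 GB write against a 4 GB cache: the reported average is pulled well
# below the 120 MB/s burst, toward the 50 MB/s disk rate.
print(round(avg_throughput(8192, 4096, 120, 50), 1))  # 70.6
```

This is why averages over files only modestly larger than the cache still overstate sustained performance; the file has to dwarf the cache for the disk rate to dominate.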

With the ReadyNAS Pro results we're getting, I'd go a step further and add load testing with the NAS units connected via LACP to two identical testbeds. The reason I'd suggest this is that the ReadyNAS Pro's aggregate disk I/O now exceeds a single gigabit LAN connection's functional performance. As a business user, these numbers are what I'm looking for.

Cheers,
 
With the ReadyNAS Pro results we're getting, I'd go a step further and add load testing with the NAS units connected via LACP to two identical testbeds. The reason I'd suggest this is that the ReadyNAS Pro's aggregate disk I/O now exceeds a single gigabit LAN connection's functional performance. As a business user, these numbers are what I'm looking for.
The suggested test might be doable on an occasional basis for products whose performance warrants it. But for most products, it wouldn't be worth it for either the time or $ involved.

On a similar note of "not worth it", I'm considering dropping 100 Mbps tests unless a product's gigabit-connected performance is < 12.5 MB/s. NASes now all come with gigabit LAN, and their gigabit performance is generally > 12.5 MB/s, which is all a 100 Mbps connection can carry. That's why I changed the NAS Chart default view to gigabit -- the 100 Mbps view isn't very helpful.

This would be a big time-saver, since the 100 Mbps tests running up to 4 GB file sizes take a while...
 
How do you get decent throughput on QNAP?


Hi, I'm wondering if you can give me some pointers on how to get decent data rates. I'm fairly puzzled that I can't get jumbo frames to work even though the switch, the Marvell Yukon card, and the QNAP are all set for them. If I ping the QNAP with -l 9000 -f, it says the packet has to fragment; with -l 8972 -f it is happy, but that's not the size of the packet the network card sends.

I think I have even more systemic problems, because cutting out the switch and even going laptop-to-laptop at gigabit, I'm still seeing only about 27 MB/s. The source machine is no slouch -- it's a Dell M6400 with RAID 1 SATA drives -- yet network utilization shows at most 27 percent, and I'm not sure why. Obviously, if I can't get two laptops to move data quickly, I can't move on to solving whatever problem may exist with the QNAP beyond the fragmentation issue. Any tips on where to start?
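For what it's worth, the -l 8972 result above is expected rather than a fault: Windows ping's -l sets the ICMP payload size, and the IPv4 header (20 bytes, no options) plus the ICMP echo header (8 bytes) must also fit inside the 9000-byte MTU. The arithmetic:

```python
MTU = 9000         # jumbo frame MTU configured on the NIC, switch, and NAS
IP_HEADER = 20     # IPv4 header without options
ICMP_HEADER = 8    # ICMP echo request header

# Largest ping payload that fits in one unfragmented 9000-byte IP packet
max_payload = MTU - IP_HEADER - ICMP_HEADER
print(max_payload)  # 8972
```

So a successful `ping -l 8972 -f` actually confirms the 9000-byte path end to end.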

-Bob
 
