Adaptec 3405 PCIe RAID0 on Vista Performance


Dennis Wood

Senior Member
We've been working for some time on various workstation configurations based on onboard RAID5 and RAID0 solutions, so a fair bit of testing has been going on. For reference, here is an IOzone plot of the current setup on a Core2Duo workstation (Asus P5N32-SLI MB with Nvidia 680i chipset) running Vista SP1. We did not test RAID5, as we need write speeds over 150MB/s. If you plan on building your own NAS using Vista SP1, this is what you can expect as far as local performance goes. I'll have a plot later showing performance between these two workstations.
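For anyone who wants to reproduce this kind of plot, IOzone's automatic mode sweeps file size against record size, which is what the surface graphs show. A run along these lines would produce similar data; the 4G cap and output filename here are my assumptions, not the poster's actual command:

```shell
# Hypothetical IOzone run for a local RAID array (flags are assumptions):
#   -a          automatic mode: vary both file size and record size
#   -g 4G       cap maximum file size at 4 GB (well past the 2 GB of RAM,
#               so results reflect the disks rather than the cache)
#   -i 0 -i 1   run only write/rewrite (0) and read/reread (1) tests
#   -R -b ...   emit an Excel-compatible report for plotting
iozone -a -g 4G -i 0 -i 1 -R -b raid0_results.xls
```

Running the test on the array's own volume (e.g. from a directory on the RAID 0 drive) is what makes this a local rather than a network measurement.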

The ICH7 RAID 0 array uses 3x 320GB WD drives, running in a Core2Duo workstation with 2GB RAM and the Asus P5W-DH Intel chipset motherboard. The Adaptec 3405 PCIe card is in a Core2Duo workstation with 4x 500GB WD drives in RAID 0, using the Asus P5N32-SLI MB with Nvidia 680i chipset and 2GB of RAM.

Links to full-sized images:
http://www.cinevate.com/images/3405a.jpg
and
http://www.cinevate.com/images/3405b.jpg
 

Larger pics please. Can't read the details.
That's probably my fault. I limited image upload size because some people were posting large images that were breaking the page layout.

Turns out they were posting links to hosted images, which I have disabled.
I have edited Dennis' original post with larger attachments and links to
full-sized images.
 
Dennis, this is with your teamed gigabit NIC setup, correct?
 
Other than the TS509, there are no teamed adapters in use anymore on the workstations. I had read that setting up trunks using LACP was a mess when different hardware vendors were brought into play, and that's what we're seeing. So the only thing that works well is to have one trunk enabled to one device. If you had the exact same hardware (NIC vendor) on each workstation, I suspect it would work better. The TS509 does benefit from using load-sharing mode to an LACP switch in a very obvious way when hit by multiple workstation requests.

In the newer firmware, the TS509 network drivers have been updated, which, according to the switch stats, means that one load-sharing NAS port sits idle unless more than one workstation is hitting it. Earlier firmware showed one port handling TX and the other RX even with a single connection... and this was faster in the single workstation/single NAS scenario. The new code definitely works better in a multi-user scenario, but at the expense of single-workstation reads (on files in the 200GB range), which are in the 90MB/s range now instead of 100MB/s previously.
 
You never really mentioned it but I am guessing that the iozone tests were on the local RAID 0 arrays and not over the network. The graphs confused me a bit because I thought they were showing two different sets of data. Then I realized that the second one is just a zoomed in view of the first graph.

Are the drives you have been testing with newer generation drives?

From looking at the results it looks like either RAID 0 array could saturate a gigabit connection. Not too shabby.

Is there any way you might be able to test with RAID 5? I only ask because I think many on the forums have wondered (well, I have) if the add-in RAID cards really do offer better RAID 5 performance than the onboard solutions.

Thanks,

00Roush
 
The drives are all SATA II. That's correct, they're both local tests, and the second is indeed just an expanded view. The workstations are still hovering around 90MB/s (reads and writes) on large files and encodes done between them. Keep in mind though that both are RAID 0. So although both workstations can locally perform at over 100MB/s, the overhead in transferring (considering that our file set is 122 files totaling 4.98GB) seems to keep the measured transfer rates under 100MB/s. Vista reports speeds over 100MB/s, but this is only for the first few GB of data, until the receiving unit's cache is exhausted.
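As a rough sanity check on those figures (assuming decimal units, i.e. 4.98 GB ≈ 4980 MB), the whole 122-file set at a sustained 90 MB/s works out to just under a minute:

```shell
# Time to move the ~4980 MB file set at a sustained 90 MB/s
awk 'BEGIN { printf "%.0f seconds\n", 4980 / 90 }'
# -> 55 seconds
```

That is long enough for the first few cached gigabytes (where Vista reports >100 MB/s) to be averaged out by the disk-limited remainder of the transfer.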

My plan was to test RAID5 as well; however, I'd predict a drop to about 80MB/s on the writes (actual measured). That's still 3 times faster than ICH7 onboard RAID5 writes, and about 7 times faster than Nvidia onboard RAID5, but 75-80MB/s is still too slow for our purposes, hence the RAID 0. RAID 5, even on a very fast card, is not adequate for uncompressed HD video editing. These workstations are backed up daily, so a drive failure is not something we're too worried about.
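For context on why the ~150 MB/s write floor mentioned earlier rules RAID5 out: uncompressed 1080p is demanding. A back-of-envelope estimate, assuming 8-bit RGB (3 bytes per pixel) — actual capture formats (10-bit, 4:2:2, other frame rates) will differ:

```shell
# Uncompressed 1080p data rate, assuming 3 bytes/pixel (8-bit RGB):
# 1920 x 1080 x 3 bytes = ~6.2 MB per frame
awk 'BEGIN {
  bytes_per_frame = 1920 * 1080 * 3
  printf "24 fps: %.0f MB/s\n", bytes_per_frame * 24    / 1e6
  printf "30 fps: %.0f MB/s\n", bytes_per_frame * 29.97 / 1e6
}'
```

Even at 24 fps the stream approaches 150 MB/s, so an 80 MB/s RAID5 write ceiling leaves no headroom.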

I've been trying to test between workstations using IOzone, but some funky Vista SP1 business keeps blocking the file deletion/creation that IOzone performs during testing. It's not permissions, but rather something to do with roaming profiles (although we're not using them) or the like. The forums seem littered with folks experiencing grief with Vista replacing deleted files due to a myriad of issues. I messed with it for a bit and then gave up, as the behaviour is not an issue with our network setup otherwise. I tend to use our measured results anyway, as the benchmarks have quirks that often make the results questionable.
 
