30TB FreeNAS Hardware List & Questions

Mr. Bungle

Occasional Visitor
Hi everyone - this is my first NAS build and I'm hoping to get some feedback on my hardware choices.

Here's some background info about my usage requirements and patterns:

  • My main goal is to consolidate my file storage and provide some redundancy for peace of mind. I currently have 9 1.5TB disks in a JBOD configuration (scary, I know), 2 2TB disks in a dual-drive USB external enclosure, and 2 individual 1.5TB USB external drives. My plan is to reuse the 9 1.5TB disks for a separate project, and the rest I can either use for partial backup of my NAS, sell, or use for something else.
  • I have ~2TB of widely assorted files that I *really* don't want to lose (music, photos, work-related files, personal files, etc.).
  • I have about 16TB (and gradually growing) of video, almost entirely in files of ~3-50GB, which I can reconstruct if absolutely needed - but that would still be a huge time sink so I'm inclined to keep it as safe as I can. I'm not ready to spend the money necessary for a true backup solution, so I'm hoping that RAID-Z2 is sufficient there. I'm aware that RAID != backup, and I think I've gotten a good sense of the tradeoffs and possible pitfalls - but I'm willing to accept a bit of risk with these files to save a good amount of money.
  • In most cases, losing an occasional file or two due to corruption, bit rot, etc., wouldn't be the end of the world, but I'd like to try to avoid it, or at least become promptly aware of it.
  • The most frequent usage will be Time Machine backups from two Macbooks and a Mac Pro (approx. hourly during the daytime), followed by watching video on a networked HTPC.
  • It will usually be accessed by one, but maximum two simultaneous users on a LAN.
  • Internet access is a plus, but not required.
  • Downtime isn't a big concern (even a couple days is fine).
  • I'm hoping to keep power usage very low. The system will be idle much/most of the time, so I'm expecting to rely on the latest power-saving technologies (e.g., Intel SpeedStep, or the EPU feature on Asus AMD boards) for this.

My plan is to build a ZFS/RAID-Z6 based FreeNAS box, but I'm open to any other suggestions or ideas. I'm on the fence about whether to give special treatment to the ~2TB of more crucial files - i.e., create a separate RAID array for those or the like. I'd appreciate any opinions there.

Considering all of that, I've done a couple of weekends' worth of research and have come up with a lot of questions based on a FreeNAS build:

  • Do I need to worry about support for my 3TB Hitachi drives? If I understand correctly, modern 64-bit OSs should support them out of the box as storage drives, and the compatibility issues mainly arise when they're used as boot drives; is that correct? The HBA that I've chosen (HighPoint Rocket 620) was actually bundled with the earliest 3TB drives on the market, so it should have solid support, I think.
  • A commenter on another thread suggested that the 512-byte sector size of the Hitachi 3TB drives could cause performance issues. Any idea what he is referring to there?
  • Is ECC memory advisable (i.e., worth the $50-100 total premium)? I've heard mixed opinions, but the more legitimate sources seem to say it is. I assume registered/buffered is not that important?
  • Would I be better off with an Intel-based server board with ECC memory (like http://www.newegg.com/Product/Product.aspx?Item=N82E16813121526), or is an Asus AMD desktop motherboard sufficient? (The latter supports ECC memory.) The main advantage to the Intel that I can see is onboard dual LAN ports, but are there others I'm not aware of? I like the Asus route because they support nice consumer-grade features like dynamic power efficiency, onboard video (for cheap/easy initial setting up and BIOS tweaking), USB 3.0, etc. However, will I be able to use all these features running FreeNAS (primarily the EPU power-saving feature)?
  • Is AMD a first-class citizen in the FreeBSD/FreeNAS world, or is Intel a safer bet in terms of stability and driver availability?
  • What kind of CPU do I need to ensure that it won't be a performance bottleneck? (Please see my build list below for my best guess.)
  • Is 8GB RAM sufficient? Does the need for RAM capacity increase with more storage space?
  • Is the commonly-used Realtek 8111E LAN Chipset enough to see the ~100MB/s ideal local network speed, or will I need a good Intel NIC? Are there any other typical bottlenecks to worry about?
  • Is Cat6 cable length much of an issue in terms of performance degradation? My longest stretch will be about 30-40 feet.
  • I found a write-up (http://lime-technology.com/home/87-for-system-builders) that highly recommends a single-rail PSU, but that wasn't until I'd already bought a dual-rail PSU (Antec EarthWatts EA-380D). Is this likely to be a problem?
  • Is there anything else I haven't considered, or may be missing?

Based on my research, current level of understanding, and inclinations, I've put together this build list:

ALREADY OWN:


READY TO BUY:


How does that look? I'd appreciate any and all feedback - please let me know if I missed anything, or need to provide any more information. Thanks in advance!
 
Again an impressive bit of research, kind of daunting to respond to, but let me try.

  • Do I need to worry about support for my 3TB Hitachi drives? If I understand correctly, modern 64-bit OSs should support them out of the box as storage drives, and the compatibility issues mainly arise when they're used as boot drives; is that correct? The HBA that I've chosen (HighPoint Rocket 620) was actually bundled with the earliest 3TB drives on the market, so it should have solid support, I think.

    No, OS support isn't an issue, just HBA support.
    HighPoint is more of a semi-intelligent SATA controller; it requires more of your CPU than an intelligent controller card. Both Syba and Intel make good cards.

  • A commenter on another thread suggested that the 512-byte sector size of the Hitachi 3TB drives could cause performance issues. Any idea what he is referring to there?

    He was referring to 512e, where the drive presents a 512-byte logical sector size but uses a larger sector size internally. This has been known to cause problems.

    The Hitachi drives use a native 512-byte sector size.

  • Is ECC memory advisable (i.e., worth the $50-100 total premium)? I've heard mixed opinions, but the more legitimate sources seem to say it is. I assume registered/buffered is not that important?

    As a rule of thumb, use as much of the highest-quality memory as you can. But given the usage scenario you describe, I don't see a compelling need for ECC memory. If you were running 10 million transactions an hour to the machine, ECC would be more critical.

    Think of it this way: any memory has a probability curve for an error occurring, say 1 chance in 50 billion transactions; with ECC that is reduced by at least an order of magnitude, to 1 in 500 billion transactions.

    The question is, how likely is a bit flip to be a critical issue? That is negligible in and of itself. Since you are not calculating missile trajectories, I wouldn't be that concerned.

  • Would I be better off with an Intel-based server board with ECC memory (like http://www.newegg.com/Product/Product.aspx?Item=N82E16813121526), or is an Asus AMD desktop motherboard sufficient? (The latter supports ECC memory.) The main advantage to the Intel that I can see is onboard dual LAN ports, but are there others I'm not aware of? I like the Asus route because they support nice consumer-grade features like dynamic power efficiency, onboard video (for cheap/easy initial setting up and BIOS tweaking), USB 3.0, etc. However, will I be able to use all these features running FreeNAS (primarily the EPU power-saving feature)?

    Intel NICs are faster, run cooler, and rely less on the CPU. The drivers are also better; Intel NICs are a no-brainer.

    I have lately been exceptionally disappointed with Asus customer support; it is crap (just look at the router issues raised here; their graphics cards also have issues). Currently SuperMicro and ASRock are my personal faves. SuperMicro support is second to none, and they make excellent server boards.

  • Is AMD a first-class citizen in the FreeBSD/FreeNAS world, or is Intel a safer bet in terms of stability and driver availability?

    I prefer Intel, but others differ. AMD has a better price/performance curve; Intel (IMHO) is just more solid and better supported. It doesn't make a huge difference.

  • What kind of CPU do I need to ensure that it won't be a performance bottleneck? (Please see my build list below for my best guess.)

    I thought you were looking at some of the new Sandy Bridge boards?

  • Is 8GB RAM sufficient? Does the need for RAM capacity increase with more storage space?

    (addressed this over on the other thread) RAM is performance: more RAM, better performance, and modern 64-bit OSes will take advantage of all the memory they have.

    That said, 8GB is more than sufficient. Take a look at the commercial NASes: none of them run with 8GB of memory, and though the memory requirements of ZFS are greater, I very much doubt you'll see a bottleneck here.

  • Is the commonly-used Realtek 8111E LAN Chipset enough to see the ~100MB/s ideal local network speed, or will I need a good Intel NIC? Are there any other typical bottlenecks to worry about?

    Use Intel.

    Driver support and reliance on the CPU make Realtek significantly less attractive to everyone except the motherboard manufacturers.

  • Is Cat6 cable length much of an issue in terms of performance degradation? My longest stretch will be about 30-40 feet.

    Not an issue

  • I found a write-up (http://lime-technology.com/home/87-for-system-builders) that highly recommends a single-rail PSU, but that wasn't until I'd already bought a dual-rail PSU (Antec EarthWatts EA-380D). Is this likely to be a problem?

    I use single rail. With dual rail there can be an issue with power availability where you need it. But this is not a big issue, given a large enough high-efficiency power supply.

    For me, warranty is a bigger issue, since most of the machine failures I've had have been PSU related.

    Antec makes good power supplies.

    With 10x3TB drives, 350W strikes me as on the low end, notably during spin-up - have you done your research on that?

  • Is there anything else I haven't considered, or may be missing?

I think running from USB thumb boot drives is an issue, but you'll figure it out.

I take it hot swap is not an issue for you?
 
My plan is to build a ZFS/RAID-Z6 based FreeNAS box
I presume you mean RAID-Z2 (equivalent to RAID6).
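
For a rough sense of what RAID-Z2 costs you in capacity (back-of-the-envelope only; 10x3TB is taken from your plan, and a real pool loses a little more to ZFS metadata and free-space headroom):

    # RAID-Z2 capacity sketch: two drives' worth of space goes to parity.
    drives = 10
    drive_tb = 3.0                               # marketing terabytes (10^12 bytes)
    parity = 2                                   # RAID-Z2 = double parity
    usable_tb = (drives - parity) * drive_tb     # before ZFS overhead
    usable_tib = usable_tb * 1e12 / 2**40        # roughly what the OS will report
    print(f"raw {drives * drive_tb:.0f} TB, usable ~{usable_tb:.0f} TB (~{usable_tib:.1f} TiB)")
    # raw 30 TB, usable ~24 TB (~21.8 TiB)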

Do I need to worry about support for my 3TB Hitachi drives? If I understand correctly, modern 64-bit OSs should support them out of the box as storage drives, and the compatibility issues mainly arise when they're used as boot drives; is that correct? The HBA that I've chosen (HighPoint Rocket 620) was actually bundled with the earliest 3TB drives on the market, so it should have solid support, I think.

A commenter on another thread suggested that the 512KB sector size of the Hitachi 3TB drives could cause performance issues. Any idea what he is referring to there?
He was referring to 512e, where the drive presents a 512-byte logical sector size but uses a larger sector size internally. This has been known to cause problems.

The Hitachi drives use a native 512-byte sector size.
Yes, 512e (512-byte emulation) as opposed to 4Kn (4K native) is the issue here. As GregN said, though, these Hitachis are 512-byte native disks, so the only issue you will have is with addressing more than 2.19TB in the partition table; your OS will need to be able to read GUID Partition Tables.
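
The 2.19TB figure is just the 32-bit sector limit of the old MBR partition scheme; a quick sanity check, assuming 512-byte sectors:

    # MBR uses 32-bit sector addresses, so with 512-byte sectors it tops out here:
    print(2**32 * 512 / 1e12)   # ~2.199 TB, hence GPT for 3TB disks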

Is the commonly-used Realtek 8111E LAN Chipset enough to see the ~100MB/s ideal local network speed, or will I need a good Intel NIC? Are there any other typical bottlenecks to worry about?

125MB/s is the theoretical speed; you should probably only expect about half that:
http://www.tomshardware.com/reviews/gigabit-ethernet-bandwidth,2321-7.html
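
Rough numbers behind that (illustrative only; the framing-efficiency figure is an assumption, and real results depend on protocol and disks):

    # Gigabit Ethernet ceiling vs. what file copies typically achieve.
    line_rate_bps = 1_000_000_000
    wire_MBps = line_rate_bps / 8 / 1e6                # 125 MB/s raw line rate
    framing_efficiency = 0.94                          # assumed Ethernet/IP/TCP overhead
    print(wire_MBps, wire_MBps * framing_efficiency)   # ~125 and ~117 MB/s best case
    # SMB/AFP overhead and disk behaviour usually pull real copies down to 50-70 MB/s.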

Is Cat6 cable length much of an issue in terms of performance degradation? My longest stretch will be about 30-40 feet.
Think of cat6 as cat5e for extended runs. Cat6 is only really useful for long cable runs (greater than 50 metres), even then you will see very little, if any, performance gains. Unless you want to waste money or plan to move to 10Gbit, stick with cat5e.
 
GregN, thanks so much! You've been exceedingly helpful.

  • Do I need to worry about support for my 3TB Hitachi drives?
No, OS support isn't an issue, just HBA support.
HighPoint is more of a semi-intelligent SATA controller; it requires more of your CPU than an intelligent controller card. Both Syba and Intel make good cards.

Ah, good to know. I chose the HighPoint maybe not for the best reasons - low price, brand recognition, and because I was confident that it has 3TB drive and FreeBSD support. What would you recommend in the Syba/Intel department - something like this?

http://www.newegg.com/Product/Product.aspx?Item=N82E16816124045

It seems like 2-port SATA HBAs are the cheapest type out there - less than half the price of 4-port SATA HBAs - so it seems to make sense to stick with those, assuming I have enough onboard SATA ports and available PCI-E 2.0 slots. Let me know if you'd recommend a different approach.

  • A commenter on another thread suggested that the 512-byte sector size of the Hitachi 3TB drives could cause performance issues. Any idea what he is referring to there?
He was referring to 512e, where the drive presents a 512-byte logical sector size but uses a larger sector size internally. This has been known to cause problems.

The Hitachi drives use a native 512-byte sector size.

Gotcha. Sounds like I should be OK there, then.

  • Is ECC memory advisable (i.e., worth the $50-100 total premium)?
As a rule of thumb, use as much of the highest-quality memory as you can. But given the usage scenario you describe, I don't see a compelling need for ECC memory. If you were running 10 million transactions an hour to the machine, ECC would be more critical.

Think of it this way: any memory has a probability curve for an error occurring, say 1 chance in 50 billion transactions; with ECC that is reduced by at least an order of magnitude, to 1 in 500 billion transactions.

The question is, how likely is a bit flip to be a critical issue? That is negligible in and of itself. Since you are not calculating missile trajectories, I wouldn't be that concerned.

That makes sense. I guess where I'm falling short of understanding is in what the symptoms of a flipped bit really are. Does that mean I've lost a file? Is the file still usable, but is just ever-so-slightly corrupted (like an off-pixel in a photo, a garbage character in a text file, etc.)? Will I run the risk of a failed RAID rebuild? I think that knowing what the implications actually are will help me to decide whether it's worth the extra hardware cost.

  • Would I be better off with an Intel-based server board with ECC memory, or is an Asus AMD desktop motherboard sufficient?
Intel NICs are faster, run cooler, and rely less on the CPU. The drivers are also better; Intel NICs are a no-brainer.

I have lately been exceptionally disappointed with Asus customer support; it is crap (just look at the router issues raised here; their graphics cards also have issues). Currently SuperMicro and ASRock are my personal faves. SuperMicro support is second to none, and they make excellent server boards.

I've heard enough good things about Intel NICs that I think I should probably make that a build requirement - either one of these:

http://www.servethehome.com/intel-gigabit-ct-desktop-ethernet-adapter-pcie-x1-quickreview/

or an integrated Intel network controller if I opt for a server motherboard.

I've also heard great things about SuperMicro, although I wasn't as sure about ASRock. I've personally had good luck with Asus, but I realize that it's when the bad luck strikes that you really get a sense of a company's integrity. I was drawn to Asus for a motherboard mainly because they almost universally support ECC on their AMD consumer-grade boards. I'm still on the fence about whether I need ECC or not.

  • Is AMD a first-class citizen in the FreeBSD/FreeNAS world, or is Intel a safer bet in terms of stability and driver availability?
I prefer Intel, but others differ. AMD has a better price/performance curve; Intel (IMHO) is just more solid and better supported. It doesn't make a huge difference.

I also prefer Intel - I'm considering AMD mainly for the reasons in the previous point (consumer-grade ECC support). I was mainly wondering whether using AMD (if it proves to be cost-effective) for a FreeNAS build would significantly limit my options/choices, or whether I should be OK. The gist I'm getting is that AMD is OK, but Intel is ideal.

  • What kind of CPU do I need to ensure that it won't be a performance bottleneck?
I thought you were looking at some of the new Sandy Bridge boards?

Yeah, originally I was until I felt the desire for ECC. That's really all that has drawn me to the AMD world.

On the Intel side, I was considering a G620 until I noticed that the G530 is now shipping:

http://www.newegg.com/Product/Product.aspx?Item=N82E16819116409

It seems like the main difference between the two (besides 0.2GHz of clock rate) is in the video support, so the G530 would be the better deal at $20 less. Would you agree?

  • Is 8GB RAM sufficient?
(addressed this over on the other thread) RAM is performance: more RAM, better performance, and modern 64-bit OSes will take advantage of all the memory they have.

That said, 8GB is more than sufficient. Take a look at the commercial NASes: none of them run with 8GB of memory, and though the memory requirements of ZFS are greater, I very much doubt you'll see a bottleneck here.

OK, cool. At least every motherboard I've considered has 4 RAM slots, so I can always upgrade to 16GB total if needed. I'm planning on a 2x4GB set initially to leave me that option for expansion.

I use single rail. With dual rail there can be an issue with power availability where you need it. But this is not a big issue, given a large enough high-efficiency power supply.

For me, warranty is a bigger issue, since most of the machine failures I've had have been PSU related.

Antec makes good power supplies.

With 10x3TB drives, 350W strikes me as on the low end, notably during spin-up - have you done your research on that?

Unfortunately, I haven't. I didn't come across that write-up until I already had the PSU installed in my case (good limited-time offer on NewEgg). Will I risk any damage if I just try it to see if it's sufficient? Or, is there a good way to tell exactly what I'll need and what the PSU can handle? This isn't an area I'm very familiar with; I've seen power usage calculators that are meant to help with this, but most are so generalized as to seem of limited value.
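
A rough way to ballpark it (the figures below are generic assumptions, not Hitachi datasheet or EA-380D rail numbers - check both before trusting this):

    # Worst-case startup draw: all drives spinning up at once, plus the rest of the system.
    drives = 10
    spinup_w_per_drive = 25   # assumed: 3.5" 7200rpm drives often pull 20-30W (mostly 12V) at spin-up
    base_system_w = 60        # assumed: CPU, board, RAM and fans during boot
    total_w = drives * spinup_w_per_drive + base_system_w
    print(total_w)            # ~310W, most of it on the 12V rail(s)
    # Compare that against the PSU's combined 12V rating; staggered spin-up
    # (if the HBA/BIOS supports it) spreads the load out considerably.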

  • Is there anything else I haven't considered, or may be missing?

I think running from USB thumb boot drives is an issue, but you'll figure it out.

I take it hot swap is not an issue for you?

I thought that booting from a USB thumbdrive was a pretty common practice and stable in FreeNAS 8; or are you referring to other types of issues?

Hot swap isn't a concern at all. So long as I can be promptly notified of a drive failure, I'm perfectly fine just turning the system off while the replacement drive is in the mail, and rebuilding offline. To be honest, I know that RAID is about availability (not backup), but I'm intending to use it more like backup, with little concern for availability (as it's just for personal use). I'll definitely back up my *really* important files offsite, but the vast majority of space will be used for video files that I can restore if all else fails (it's just a lot of time investment that I'll pay to have a good chance of avoiding).

So, in spite of the excellent help that you and others have given me and all of the extra knowledge I've gained through my research, I think I've reached that point that one comes to where I'm more unsure than ever about what direction to take. So many approaches will work, but I just don't know what's ideal.

If you were me, what hardware would you pick (mainly motherboard and HBA(s))? At this point I've considered everything from Zacate, to Sandy Bridge, to an Asus/AMD consumer mobo with ECC support, to server-grade hardware like a SuperMicro board that you mentioned. When it all comes down to it, I just need 10 SATA2 ports; an Intel NIC, adequate CPU, and as many power-saving features as possible are all want-to-haves. My budget isn't fixed, but I'd feel great if I could get away with <$300 for the motherboard, CPU, HBA(s) (if needed) and Intel NIC (if needed). Zacate had me at <$200 for all that; Sandy Bridge or AMD/Asus at ~$200-250 (the latter being a little more due to ECC memory), and I'm unsure so far about the server-class route.

(By the way, do you have a blog or site or organization or something that I can donate a few bucks to as some thanks for your guidance?)
 
Thanks, Hydaral!

I presume you mean RAID-Z2 (equivalent to RAID6).

Sorry, yeah - RAID-Z2.

Yes, 512e (512-byte emulation) as opposed to 4Kn (4K native) is the issue here. As GregN said, though, these Hitachis are 512-byte native disks, so the only issue you will have is with addressing more than 2.19TB in the partition table; your OS will need to be able to read GUID Partition Tables.

Gotcha, OK. That makes sense.

125MB/s is the theoretical speed; you should probably only expect about half that:
http://www.tomshardware.com/reviews/gigabit-ethernet-bandwidth,2321-7.html

That's an interesting article. Since HDD speed seems to be a common limiting factor, do I need to worry about the performance of my RAID-Z2 array? I thought I'd read somewhere that a RAID-Z array only performs at a speed analogous to one of its component drives (which would probably be a bottleneck in this case), but I could be confusing that with something else.
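
For a rough sense of scale: the usual rule of thumb is that a RAID-Z vdev has the random-I/O performance of about one member drive, but large sequential reads stream from all of the data drives, so for big video files the assumed numbers below suggest the network, not the pool, is the ceiling:

    # Rough sequential-throughput estimate for a 10-disk RAID-Z2 vdev (assumed figures).
    single_drive_MBps = 100        # assumed sequential rate for a 3TB 7200rpm disk
    data_drives = 10 - 2           # RAID-Z2 keeps two drives' worth of parity
    pool_sequential_MBps = single_drive_MBps * data_drives
    gigabit_ceiling_MBps = 125
    print(pool_sequential_MBps, "vs gigabit ceiling", gigabit_ceiling_MBps)
    # Lots of small random I/O is different: there the vdev behaves more like one drive.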

Think of cat6 as cat5e for extended runs. Cat6 is only really useful for long cable runs (greater than 50 metres), even then you will see very little, if any, performance gains. Unless you want to waste money or plan to move to 10Gbit, stick with cat5e.

Cool, thanks - that's good to know. I've actually seen pretty comparable prices for Cat5e vs. Cat6 (e.g., at MonoPrice) so I figured I'd just be safe, spend a couple extra bucks and go with Cat6.
 
That makes sense. I guess where I'm falling short of understanding is in what the symptoms of a flipped bit really are. Does that mean I've lost a file? Is the file still usable, but is just ever-so-slightly corrupted (like an off-pixel in a photo, a garbage character in a text file, etc.)? Will I run the risk of a failed RAID rebuild? I think that knowing what the implications actually are will help me to decide whether it's worth the extra hardware cost.
The only symptom of a bit flip will be either a file that doesn't work at all or one that works but isn't quite right; the system may also crash altogether. I just tried it with a PNG image: flipping a bit about halfway through the file corrupts everything from that point onwards - the bottom half of the image is either distorted or black. I also tried it on a text file; it changed a "c" to an "s".
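
A minimal sketch of that experiment (the filename is hypothetical, and it writes a modified copy rather than touching the original):

    # Flip a single bit roughly halfway through a file and save the result as a copy.
    data = bytearray(open("sample.png", "rb").read())   # hypothetical test file
    midpoint = len(data) // 2
    data[midpoint] ^= 0b00000100                        # flip one bit of that byte
    open("sample_flipped.png", "wb").write(bytes(data))
    # In a compressed format (PNG, JPEG, video) everything after the flip may decode wrong;
    # in plain text it just changes a single character.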

It will not affect RAID/ZFS operations; everything in the system (including non-ECC RAM) will think the file is perfectly fine, which from a data point of view (as opposed to content), it is.
 
Ah, good to know. I chose the HighPoint maybe not for the best reasons - low price, brand recognition, and because I was confident that it has 3TB drive and FreeBSD support. What would you recommend in the Syba/Intel department - something like this?

http://www.newegg.com/Product/Product.aspx?Item=N82E16816124045

It seems like 2-port SATA HBAs are the cheapest type out there - less than half the price of 4-port SATA HBAs - so it seems to make sense to stick with those, assuming I have enough onboard SATA ports and available PCI-E 2.0 slots. Let me know if you'd recommend a different approach.

More this

It is x1 PCIe, and ideally you want x4 PCIe 2.0, but that can be tough to find; the Promise and Intel cards are a bit steep. If you can find it, the ASUS U3S6 USB 3.0 & SATA 6Gb/s add-on card looks interesting.

Make sure the card supports port multipliers (with FIS-based switching if possible).

That makes sense. I guess where I'm falling short of understanding is in what the symptoms of a flipped bit really are. Does that mean I've lost a file? Is the file still usable, but is just ever-so-slightly corrupted (like an off-pixel in a photo, a garbage character in a text file, etc.)? Will I run the risk of a failed RAID rebuild? I think that knowing what the implications actually are will help me to decide whether it's worth the extra hardware cost.

The probability of memory errors occurring is very low. Google claims it watched for a month across an entire data center and it didn't happen once; other folks claim 4GB of RAM will see it happen at least once every 72 hours. Remember some memory has parity checks, and ECC is on top of that.

For a memory error to be a problem it has to hit active memory (not high memory on a buffer fill, or an instruction after it has been written to the CPU core). Memory is in flux all the time: some is fallow, some is data, and some is instructions. ECC only handles a bit flip or two; with more than that you'd fail in the same way. It just reduces the probability.
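
Putting the two rates quoted in this thread side by side (these are the claimed figures, not measurements):

    # Illustrative arithmetic only, using the numbers claimed above.
    hours_per_year = 24 * 365
    flips_per_year_72h_claim = hours_per_year / 72     # "once every 72 hours on 4GB"
    print(round(flips_per_year_72h_claim))             # ~122 flips per year
    # versus "1 in 50 billion transactions" (1 in 500 billion with ECC), whose real-world
    # frequency depends entirely on how many memory transactions per second the box does.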

I've heard enough good things about Intel NICs that I think I should probably make that a build requirement - either one of these:

http://www.servethehome.com/intel-gigabit-ct-desktop-ethernet-adapter-pcie-x1-quickreview/

or an integrated Intel network controller if I opt for a server motherboard.

I prefer the Pro models; for about the same money they support link aggregation and are more tunable, if you ever go that way.

I've also heard great things about SuperMicro, although I wasn't as sure about ASRock. I've personally had good luck with Asus, but I realize that it's when the bad luck strikes that you really get a sense of a company's integrity. I was drawn to Asus for a motherboard mainly because they almost universally support ECC on their AMD consumer-grade boards. I'm still on the fence about whether I need ECC or not.

I don't know if it is in your budget, but the SuperMicro C7Q67 looks sweet.

On the Intel side, I was considering a G620 until I noticed that the G530 is now shipping:

http://www.newegg.com/Product/Product.aspx?Item=N82E16819116409

It seems like the main difference between the two (besides 0.2GHz of clock rate) is in the video support, so the G530 would be the better deal at $20 less. Would you agree?

That was the way I was going to point ya.

I thought that booting from a USB thumbdrive was a pretty common practice and stable in FreeNAS 8; or are you referring to other types of issues?

Probably, DIYers like cheap.

Thumb drives fail, they are a pain to format, they slow down, swapping can be an issue, performance is difficult to quantify...

I use an SSD and haven't looked back.

Run a DVD drive and an SSD from the motherboard SATA (SATA II?), and put the 10 drives on your SATA III HBA.

So, in spite of the excellent help that you and others have given me and all of the extra knowledge I've gained through my research, I think I've reached that point that one comes to where I'm more unsure than ever about what direction to take. So many approaches will work, but I just don't know what's ideal.

If you were me, what hardware would you pick (mainly motherboard and HBA(s))? At this point I've considered everything from Zacate, to Sandy Bridge, to an Asus/AMD consumer mobo with ECC support, to server-grade hardware like a SuperMicro board that you mentioned. When it all comes down to it, I just need 10 SATA2 ports; an Intel NIC, adequate CPU, and as many power-saving features as possible are all want-to-haves. My budget isn't fixed, but I'd feel great if I could get away with <$300 for the motherboard, CPU, HBA(s) (if needed) and Intel NIC (if needed). Zacate had me at <$200 for all that; Sandy Bridge or AMD/Asus at ~$200-250 (the latter being a little more due to ECC memory), and I'm unsure so far about the server-class route.

This is easy to answer, this is the way I went.

The projects are similar, 30TB (max 60TB raw) feeding an HTPC - where I differ is that I wanted cheap, and I wanted Fibre Channel for the article.

If I were to do it again, I'd go with the same motherboard/processors/memory, but with SuperMicro PCI-X SATA cards and ZFS using Nexenta or OpenIndiana running napp-it.

And a quieter, more expensive case.

Hope that helps.....
 
The probability of memory errors occurring is very low. Google claims it watched for a month across an entire data center and it didn't happen once; other folks claim 4GB of RAM will see it happen at least once every 72 hours.
I subcontract for a government department with about 23 servers around the state, I have been here for 5 years and we've only had a single instance of a server reporting a single bit error in memory.

Remember most memory has parity checks and ECC is on top of that.
I was never clear about whether modern, non-ECC memory can actually detect errors.
 
I subcontract for a government department with about 23 servers around the state, I have been here for 5 years and we've only had a single instance of a server reporting a single bit error in memory.

I was never clear about whether modern, non-ECC memory can actually detect errors.

I agree. I've worked in server rooms (hundreds of servers) where failures happen all the time, and not once has it been determined that it was a bit flip.

I think this is a religious argument; paraphrasing Nietzsche: the bit is dead. If it is happening, it is so lost in the noise that it doesn't matter.

My understanding is that parity can detect errors, whereas ECC actually corrects them (typically correcting a single-bit error and detecting a double-bit error).

A correction: I'm not sure that parity memory is still common; it used to be.

The problem is that program crashes due to memory errors (a jump to the wrong address, a bad instruction, a shift to zero, etc.) could be occurring, often, but program crashes happen all the time due to so many other things (mostly bad code) that it would be hard to catch or distinguish them.

Drivers and disks have retry modes to make them more fault tolerant; a memory failure there would also go unnoticed.

I wouldn't notice a small amount of bit rot in a modern compressed 1GB movie file.
 
Herr Bungle,

Have you forsaken us? Found a better source?

What progress have you made?

Howdy GregN - sorry for the slow reply - it's been a busy few weeks with work. I'm off this week, though, so I've had some time to work on my NAS build.

Thanks for another very helpful reply. The good news is that I got everything ordered and assembled! The bad news is that I'm now stuck on an issue. I posted a new thread about it here:

http://forums.smallnetbuilder.com/showthread.php?p=32618#post32618

If you wouldn't mind taking a look, I'd be thankful - I always appreciate your input.

After a lot of consideration and research, I basically narrowed it down to three routes to take:

  • consumer motherboard with Intel 1155 CPU, non-ECC memory
  • consumer motherboard with AMD CPU and ECC memory support (e.g., via an Asus board)
  • server motherboard with Intel CPU, ECC, dual onboard Intel NIC, etc.
and ultimately decided to follow the middle one (AMD w/ECC). I heard a lot of doubts about the efficacy of ECC memory, but after pricing everything out, I realized that it would only be a $50-75 premium over the first option (mainly in the price of the memory), and I found an Asus AMD board that had everything else that I wanted at a good price. I figured that it was worth the extra few bucks just in case it brought anything useful to the table.

Again, thanks for all of your help, and I could still use some more in getting past this hurdle. ;)
 
