
GregN

DIY Lessons Learned

So you are thinking of building yourself a box, be it a router, a NAS, or an HTPC. Here are some tips to help you make those build decisions. Most of these have been covered in various threads here; I'm just putting them in one place for easy reference.

5/13 - This is a work in progress; your input and further discussions of particular builds will be integrated in the future
5/14 - Added specific build guidelines, The Router
5/15 - Turns out there is a post size limit, so this post can't accrete endlessly; I've moved the Specific Builds write-up HERE
5/16 - Added specific build guidelines, The NAS, HERE
5/16 - Integrated some annotations from TK


Buying Components:

Price is often THE driving factor when cobbling together a server. Here are your options for components, in increasing order of price:

  1. The cheapest is always going to be reusing hardware you already own.

    Pluses: No cost, what could be better?

    Minuses: Not the latest generation, probably not green, and may not be the best fit for the purpose. For example, you've got an old ATX case with power supply lying around – building a router out of it means you'll have a router the size of a desktop machine with a power supply three times bigger than you need.

  2. Purchasing used hardware and repurposing it to your needs. eBay is a great source, but there are always risks with used hardware.

    Pluses: Low price, and the ability to snag industrial-grade equipment and components. For example, you can find a 3Ware RAID HBA that retailed for several hundred dollars for less than a hundred.

    Minuses: Not the latest technology, little support, no warranty – and you don't always get what you think you're getting. For example, I recently purchased what was listed as an LSI MR 300 RAID card, which is on the VMware HCL – only to discover it was the Intel OEM version, which isn't on the HCL, and making it work took more effort than expected.

  3. Buy integrated components, for example a motherboard & CPU, or a case & power supply, usually sold at a price below that of the separate components.

    Pluses: Compatibility assured; the parts will definitely work together, and you won't have to put them together.

    Minuses: Most likely a closed upgrade path, and reuse is limited. For example, if you buy the Supermicro Mini-ITX Atom integrated motherboard, you are forever stuck with the CPU that is integrated – repurposing the hardware is often not an option, and the board will not age as well as a more flexible solution.

  4. Buy brand new established individual components, usually based on someone else’s build.

    Pluses: The safest path. Compatibility assured, and the price tends to be what you expect. You get a warranty, protection against DOA components, and maybe support.

    Minuses: Probably not the latest tech, and a little more work.

  5. Buy the latest tech, for example latest processor or graphics card. Be a trail blazer.

    Pluses: Bragging rights, high performance, the thrill of the hunt.

    Minuses: Overkill for the task. Compatibility problems. Maturity of drivers and other bleeding-edge risks – but no guts, no glory.


Notes on Components

Some of this is accepted wisdom that has not been empirically verified. This is my list of preferences; others may disagree.

Motherboards

There are a handful of motherboard makers that are well liked: Supermicro, Intel, Asus, and ASRock. As readers of this forum know, I'm partial to Supermicro; their quality and willingness to provide phone support on components that are long dead is unmatched.

Hard Drives

There have been reports of problems with NAS builds that use green drives, in particular Western Digital drives that use Advanced Format and emulate a 512-byte block size (512e). Additionally, I have a hard time forgiving WD for unacknowledged fatal problems (firmware and the like); I just don't trust them.

Seagate has also had recall problems, and has been implicated in price fixing schemes.

As noted by Claykin, Samsung's drive business has been purchased by Seagate, and WD (my take) is trying to buy a better image by subsuming Hitachi.

My current favorites are Hitachi and Samsung – never had a problem, but opinions differ.

PSUs
There are three issues with PSUs: efficiency, correct sizing, and warranty period. This is important – I've seen only two types of failures across all of my builds: hard disks and PSUs. A good warranty can be a godsend, and is often worth a few bucks more.

Green power supplies are often a case of pay me now or pay me later: you'll usually save on your power bill well in excess of the additional cost of a high-efficiency PSU.
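
To put rough numbers on that pay-me-now-or-pay-me-later trade, here's a minimal back-of-the-envelope sketch in Python; the 100W load, the two efficiency figures, and the $0.12/kWh rate are all assumptions, so plug in your own:

Code:
# Rough payback estimate for a high-efficiency PSU (all figures are assumptions).
HOURS_PER_YEAR = 24 * 365
KWH_PRICE = 0.12      # assumed electricity rate, $/kWh

def yearly_cost(load_w: float, efficiency: float) -> float:
    """Yearly cost of the AC pulled from the wall for a given DC load."""
    wall_watts = load_w / efficiency
    return wall_watts * HOURS_PER_YEAR / 1000 * KWH_PRICE

cheap = yearly_cost(100, 0.70)   # run-of-the-mill PSU, ~70% efficient at a 100W load
gold = yearly_cost(100, 0.90)    # high-efficiency PSU, ~90% efficient
print(f"yearly savings: ${cheap - gold:.2f}")   # roughly $33/yr with these numbers

At those numbers, a $30-40 premium for the more efficient unit pays for itself in about a year.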

For low-power builds, PicoPSUs, which run from an external AC/DC wall wart, have become popular. Several Mini-ITX cases are now packaged with this type of PSU.

A single-rail design matters more on NAS builds, since the entire rating is available to the whole machine; it also makes a failed PSU easier to diagnose.

Corsair's high-efficiency modular single-rail PSUs with a five-year warranty are my current favorites. Seasonic is the OEM behind some of the best-known PSUs, as noted by Claykin in a follow-up.

NICs

Either integrated or as a card, Intel NICs are considered the gold standard, with high-quality drivers and flexibility. Other NICs rely more on your CPU (the work is done in the driver and not actually on the card). Intel NICs can also be easily aggregated if you choose to do so. I see no reason to go with less than a 1G interface.

Memory

Though clouded in some controversy, go ECC if you can; it is one less worry for your server application – especially for a software-RAID-based NAS.

With NAS builds, memory translates directly into performance. ZFS in particular is noted as a RAM hog.

With router builds, if you plan on running Snort or Squid, any extra memory will not be wasted.

HTPC Graphics Cards

The current generation of integrated graphics chipsets is enough to handle most HD rendering. If going with a separate card, pay attention to what audio your system will support and how. A single HDMI connection that carries 5.1 sound is usually the easiest to work with and to configure on both ends.

For HTPC builds, a high-end card will be both noisy and overkill. How much noise an HTPC produces is often a major consideration.

Cases

Beyond your own personal stylistic considerations, the only issues are noise vs. cooling, and whether the case has the room you need to grow.

For higher-capacity NASes, you'll need to decide if you want hot swap, and how to handle expanding the number of drives beyond what the motherboard supports (currently around 6).

Norco and Chenbro both make excellent cases for higher capacity hot swap NASes.

HBAs

In doing a NAS build you will often need more SATA ports than a motherboard provides; I recommend a RAID card over plain SATA cards for both performance and reliability.

RAID cards come in two forms: true enterprise cards (LSI/3Ware, Areca, and Adaptec) that do the heavy lifting on the card, and those (RocketRAID, Promise) that do most of the work in the driver, relying on your CPU. Though attractive from a price point of view, the latter are really RAID-assist cards, and you will need to size your CPU accordingly.

Recent write-ups indicate that if you plan on virtualizing your NAS, you will get much better performance with a RAID card than with a SATA card, due to issues associated with synchronization in VMFS.
 
Greg

Great advice. A few notes from me.

1) Motherboard - I agree with you on Supermicro. Good stuff. How about Intel? They are one of the few that will offer advance replacements if necessary. Try getting that out of one of the Taiwan/China manufacturers. Plus Intel builds a quality board (most of the time).

2) PSU - As far as I'm concerned, buy a Seasonic PSU. Some Corsair models are manufactured by Seasonic, but others are from Channel Well (they are OK, but not as well regarded as Seasonic). It seems Corsair is leaning more toward Channel Well for many new models. Oh well.

3) I usually recommend people stick with ATX, as replacement parts are readily available for most items, especially PSUs.

4) Drives - I've had good and bad ones from every manufacturer. Of late I've found that many Seagate "recertified" drives have problems from day 1. I've had too many experiences with 2-3 replacements before I get a good one. WDC has been a bit better with their "recertified" stuff.

Samsung is gone; it's now part of Seagate. Hitachi was acquired by WDC in March 2012, but that deal is a little different: WDC will operate Hitachi as a separate division. Maybe good, maybe not so...

Regardless of brand, if I'm using in RAID (software or hardware) or in a critical application I always opt for Enterprise drives. Yes more $, but they are arguably built better, properly support TLER (or the like) and have vastly better URE specs (nonrecoverable read errors).

5) Cooling and more - Always provide adequate cooling. Seriously, this is important. Quiet is great, but cool is better. Also, make sure a UPS is part of your build; providing stable power is necessary. If using a PFC PSU, be careful with your UPS selection. Most will work with modified sine wave UPS units, but some poorly designed PSUs won't. For mission-critical use I always go with a line-interactive, proper sine wave UPS such as the APC SMT series.
 
Form factor I was planning on covering in a follow-up. You are absolutely correct; it is much easier to shop for ATX components than any other. In particular, I have found you may need to shop somewhere other than Newegg for the best Mini-ITX components (Supermicro, for example). I'll also try to cover the pitfalls of the various form factors.

Cooling, and cooling components (and the corresponding noise factors), is a separate issue that was also planned for a follow-up.

I'd also like to hear about best or favorite vendors.
 
Buy the latest tech, for example latest processor or graphics card. Be a trail blazer.

Pluses: Bragging rights, high performance, the thrill of the hunt.

Minuses: Overkill for the task. Compatibility problems. Maturity of drivers and other bleeding-edge risks – but no guts, no glory.
The other thing you'll discover if you go this route is that everything you buy will be cheaper in a couple of months. A year after that, your amazing performance will have become average. Hopefully you've picked good components that will operate reliably for a long time, ideally until it comes time to replace the unit with something newer.

I built my original 4TB RAIDzilla in early 2005. In 2010, I replaced them with my 32TB RAIDzilla 2 design. I went with top-end components and hope to get 5 years of service out of these as well. They've been running for 2 years now, so I think I have a good chance.

Insides
Racked
500MB/sec IOzone (1.3GB/sec peak)

Basic components:
  • Supermicro X8DTH-iF motherboard
  • Xeon E5520 CPU (x2)
  • 48GB ECC RAM
  • 3Ware 9650SE-16ML RAID controller
  • 3Ware BBU
  • WD3200BEKT (x2, mirror for OS)
  • WD2003FYYS (x16, storage)
  • OCZ Z-Drive R2 P84 256GB SSD
  • CI Design NSR-316 chassis (semi-custom)
  • Slot-load DVD drive

Software:
  • FreeBSD
  • ZFS
  • SAMBA
  • Lots of other stuff

Cost for the first one was around $14K. A little over a year later it was down to a little over $9K.
 
That is why I started the thread with buying options. Your (impressive) RAIDzilla has about the same specs as my Ol' Shuck, which was cobbled together using mostly used enterprise components off of eBay, and, loaded for bear, it cost less than $2.8K (mostly for the 20x2TB drives).

Must admit my only regret has been the case, it is loud and runs a little hotter than I would like.
 
Continued, Part Two.

Build Specific Notes:

Here are notes on specific builds of a router, a NAS, and an HTPC.

Note on overloading:

Repeatedly in the forums I've seen folks say they want to build an uber-box: a NAS slash something else – a router or an HTPC, or all three.

I recommend against this for two primary reasons. One, combining the functionality often calls for significant compromises: there are platform-specific distros that make these projects very manageable, but combining them requires selecting one distro and mutating it to meet your other needs. At best this is an uncomfortable compromise; at worst it is a recipe for baldness and maintenance nightmares.

The second reason is that if one of the platforms fails, say a bad drive or failed PSU, you have lost both pieces of equipment, not just that task-specific, up-all-the-time box.


Routers

The most compelling reasons for building your own router are added security, better monitoring, and in a lot of cases (VPN, proxy, etc.) better wired performance. Priced against a standard or even high-end router there is little savings, unless you just have the hardware lying around. But compared to a security appliance there are significant savings, especially considering most appliances require a subscription for the latest threat matrix.

Generally DIY routers require adjunct components such as a switch for port expansion, and an old router or AP for wireless network access. (You can use a wireless card, such as an Asus N15 or an Intel wireless card, but the driver support is questionable, and the antenna configuration for wireless cards – stuck behind your case – is at best sub-optimal.)


When deciding to build a DIY Router:

- Small form factor; Mini-ITX is ideal

- Green, low-power CPU (i3-2100, Atom, E-350), 35-80W; see the power-cost sketch after this list

- No reason to go with less than a 2GHz CPU; VPN, traffic level, and speed may need more.

- Passive cooling is generally sufficient, but depends on the CPU

- Two or three NICs, ideally Intel, or a SuperMicro/Intel MB with dual gigabit Intel NICs

- Memory requirements depend on application/role:
2GB - Routing & Firewall
4GB - Routing & Snort IDS
8GB+ - Snort IDS, VPN, Squid Proxy and other apps

- Can support enterprise features such as multi-WAN failover and load balancing.

- OS: pfSense is my personal favorite; others prefer Untangle, Astaro, ClearOS, or m0n0wall – some of which require subscriptions.
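
On the low-power CPU point above: a router runs 24/7, so whatever it draws at the wall lands directly on your power bill. A minimal sketch, assuming a $0.12/kWh rate and ballpark wall draws (your numbers will differ):

Code:
# Rough yearly electricity cost of a DIY router running 24/7
# (the wattages and the $/kWh rate are assumptions).
HOURS_PER_YEAR = 24 * 365
KWH_PRICE = 0.12

def yearly_cost(avg_watts: float) -> float:
    return avg_watts * HOURS_PER_YEAR / 1000 * KWH_PRICE

print(f"Atom-class build (~30W at the wall): ${yearly_cost(30):.0f}/yr")   # ~$32
print(f"i3-class build (~60W at the wall): ${yearly_cost(60):.0f}/yr")     # ~$63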

An additional note: I'm a proud papa. I personally think one of the coolest bits of kit I've ever built is my router; it is an amazing little box that has saved my bacon on more than one occasion. I can't recommend it highly enough. My pfSense box does at warp speed what no single router or appliance can do, and all for free, while looking good doing it.
 
Must admit my only regret has been the case, it is loud and runs a little hotter than I would like.
Having had 5 years of experience with my 1st-generation RAIDzilla when I started planning the new one, I figure I'll post a bunch of the things I learned. For now, I'll stick to hardware, since most people won't be running stock FreeBSD and thus the software things won't be particularly applicable.

This may be particularly timely as there's a new "RAID Is Not Backup" series on the front page.

Management: The motherboard I used in the original RAIDzilla (Tyan S2721-533) had an optional remote management add-in (M3289) that never worked properly, and subsequent BIOS updates disabled it completely. I ended up using simple serial console redirection in the BIOS and FreeBSD, which didn't give me the amount of control I wanted. The Supermicro motherboard I selected comes with complete remote management on-board. That uses its own dedicated Ethernet port and I can power the system off and back on, do a hard reset, etc. as well as having complete console redirection and IPMI functionality.

Monitoring: I wanted to be able to read, log, and graph every possible sensor reading. The Supermicro motherboard has lots of onboard sensors, and has a number of SMBus connectors to allow for the connection of additional sensors. The dual power supply in the CI Design chassis provides a pair of PMBus ports which plug into the motherboard, allowing monitoring and control of the power supply as well. Here is a static capture of the page I generate for one of the RAIDzilla 2's.

Interchangeable expansion card slots: The Tyan board I used in the original 'zilla had a mishmash of expansion slots (1x 32-bit 5V PCI, 1x PCI-X 64-bit 100MHz slot, and 2x PCI-X 64-bit 133MHz slots). The other 3 positions didn't have slots at all. This meant that the 2 "good" slots were right next to each other, which caused problems as I needed both for 8-port 3Ware boards. This caused problems with cooling, as well as needing to create custom extension cords for the cache backup batteries. The Supermicro board I used in the 2nd-generation 'zilla has 7 PCI-E 2.0 x16 slots, so I can move things around for clearance and cooling.

Disk monitoring: The original 'zilla just had a drive activity LED for each drive. Later 3Ware controllers provided an I2C bus which could communicate with a number of different chassis. The new systems have 3 LEDs per drive bay and the controller operates them directly. I now have separate "ready", "access", and "fault" indications for each drive, which the controller operates in combinations to show things like SMART errors, soft read errors, and so on.

Better cooling and cable management: Cables in the first-generation units got out of hand (the initial unit was built with individual SATA cables, the next two used multi-lane cables) but there was still a lot of cabling in there as you can see on my original RAIDzilla page. The new system used fans on the back panel as well as the drive compartment, which does a better job of distributing the exhaust - the original units tended to pull hot air from the drives through the restrictive power supplies and then out the back. Even adding some "slot cooler" fans didn't make a lot of difference. As you can see in this picture, the 2nd-generation 'zilla has a lot less cabling clogging up the airflow. The only real excess cabling is from the power supply, and it isn't in the path of any airflow (the opening to the drive bay is blocked off, and the power supplies don't pull air from that side). The single SATA cables were selected for the correct length. The multi-lane SATA cables are a stock length. I could have had custom ones made, but the stock length isn't too excessive and doesn't impede airflow.

Separate operating system from NAS storage: In my original design, the OS (initially FreeBSD 4) was stored on the RAID array. The NAS storage was on separate FreeBSD partitions. While this did protect the systems from OS disk failures, it made upgrading the OS a bit trickier since there wasn't a way to write protect or disconnect the NAS partitions. With upgrades to FreeBSD 5, 6, and 7 there were changes to the MBR and partitions along the way and it required extra care to avoid clobbering the NAS storage. In the 2nd-generation design, I went with a separate pair of 320GB 2.5" drives, using FreeBSD disk mirroring (because of familiarity; I could have used the motherboard's soft RAID). This meant that the data could be write protected or removed completely during upgrades or for testing. It also meant that I could begin my initial integration testing with a single 2TB drive instead of needing to purchase 16 matched drives.

Room for hardware growth or replacement: I selected the E5520 CPUs and 48GB of registered memory as a good place to start. It is unlikely that I'll ever need to upgrade the CPUs, but I can expand to 96GB by adding the other 6 memory modules, or all the way to 192GB by replacing the 6 already in there with 12 denser (16GB) ones. I changed from an Ultra 320 SCSI board to a SAS board when I discovered that SAS tape libraries were more common (and thus less expensive) than SCSI ones. The 256GB SSD was a late addition to the build and dropped right in. I also changed from a tray-load to a slot-load DVD drive, as the case tended to press on the top of the tray and cause eject problems. I could also add a 10Gb Ethernet card at some time in the future.

Plan to open things up once the servers are racked: At some point in the lifetime of the system, you're going to need to get the top off for maintenance. The cover for the case I use has 2 large thumbscrews in the back and 4 small Phillips screws at the front, all of which are easy to get to when the server is racked (as long as there's at least a small gap between units, as shown here). However, there is also a small Phillips screw on each side of the chassis, around 2/3 of the way back, where it is nearly impossible to reach when the server is installed in a rack. I learned this the hard way, and used a right-angle ratcheting screwdriver to slowly work those screws out without removing the servers from the rack. This was needed to swap to the slot-load DVD drive as mentioned in the previous paragraph, since that change was made after the units were racked. The most common failure will be in the hot-swap drive bays. Those drives can be swapped out during system operation. The 3Ware controller will clearly indicate which drive needs replacing by lighting the red fault LED.

Data protection: As this site will tell you, "RAID Is Not Backup". Aside from hopeful optimism, data protection falls into 2 broad categories - replication and archival backup. For replication, I use a second identical 'zilla in an offsite location about 3 blocks away. Fortunately, I have a fiber optic cable between here and there, so it is simply connected to my home LAN via a Gigabit Ethernet cable. A nightly job synchronizes the two servers while also maintaining 30 days of history (so I can restore to the state at any time between last night's backup and a month ago). For backup, I use a 48-slot LTO-4 library connected to my primary 'zilla via SAS. Current backups take about 16 tapes since I'm using much less than the total space on the RAIDzilla. Each month a full backup is performed and the tapes are stored offsite in another state. In general, files on the 'zilla aren't modified - instead, new files are added. This means that the nightly replication job is only copying newly-added data, not modified files. It also means that I could perform incremental tape backups instead of full backups. However, in the event I ever need to do a full restore from tape, I'd rather not deal with incrementals.
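
For anyone sizing their own tape rotation, the arithmetic is simple. LTO-4 holds 800GB per cartridge native; the data figure below is just an assumed example, not a measurement:

Code:
# Quick estimate of how many tapes a full backup needs.
# LTO-4 native capacity is 800GB per cartridge; the data size is an assumed example.
import math

LTO4_NATIVE_GB = 800
data_in_use_tb = 12    # assumed amount of data actually stored on the NAS

tapes = math.ceil(data_in_use_tb * 1000 / LTO4_NATIVE_GB)
print(f"a full backup needs about {tapes} LTO-4 tapes")   # 15 at this size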
 
NAS

When building a DIY NAS, the first question you need to answer for yourself is "Is it worth it?". The competitive environment around consumer NASes means that building your own 3-5 drive NAS offers little savings, maybe between 50 and 150 dollars, and for your savings you get no warranty, no support, and a less polished product (in performance and ease of use), plus a definite increase in the probability of configuration/build/maintenance headaches.

In my opinion, DIY becomes compelling when you move outside the current sweet spot of consumer NASes, beyond the 3-5 drive range, into the realm of high capacity, 10-20 drives. For the money you would spend building a small NAS, say 350-450 dollars, bumping that to 550-650 dollars lets you support 2 to 3 times the capacity.
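
To make that concrete, here's a minimal cost-per-bay sketch; every price in it is an assumption pulled from the ranges above, so substitute current quotes before deciding:

Code:
# Back-of-the-envelope cost per drive bay (all prices are assumptions).
builds = {
    "4-bay consumer NAS": {"price": 450, "bays": 4},
    "4-bay DIY build": {"price": 400, "bays": 4},
    "12-bay DIY build": {"price": 650, "bays": 12},
}

for name, b in builds.items():
    print(f"{name}: ${b['price'] / b['bays']:.0f} per bay")

The small DIY build barely saves anything per bay; the larger build is where DIY pulls ahead.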

All of that said, a properly built DIY NAS can offer more flexibility, and a cleaner upgrade path.


When deciding to build a DIY NAS:


- Hot Swap or Fixed Disk? Hot Swap just says NAS.

- Software RAID will offer more flexibility at lower cost. Hardware RAID offers slightly better performance and better reliability - but at a cost.

TK Note - With a sufficiently large cache, software RAID can exceed the speed of hardware RAID.

- Pick a non-integrated CPU, such as Sandy Bridge or the latest AMD equivalent

- No reason to go with less than a 2GHz CPU; additional applications such as BitTorrent, DLNA media serving, or transcoding might need more.

- Active cooling is usually required, but depends on the CPU

Where are you going to be running your NAS? If in the living room or bedroom, the quality of the cooling and the noise it makes become major design constraints. You may need a better, quieter case.


- Two NICs, ideally Intel, or a Supermicro/Intel motherboard with dual gigabit NICs. Two, because in the future you may want to aggregate your NICs to maximize available bandwidth.

TK Note - As I've noted in other threads, aggregation does not mean 2x speed, but if your network is multi-node, aggregation can offer better throughput. Consider your upgrade path.

- It is rare to find a motherboard that supports more than 6 SATA ports; for larger capacity you'll need either a SATA or a RAID HBA

Using a RAID controller, even for JBOD, offers staggered spin-up and caching. Staggered spin-up demands less of your PSU.

Pay attention to HBA specs; you may end up with a card that only supports 2TB SATA II drives when you were hoping to someday upgrade to 4TB SATA III drives.

Consider your performance requirements when buying an HBA. Running 12 drives through a port-expanded PCIe x4 4-port SATA card is likely to create a bottleneck; rough arithmetic below.
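
Here is that bottleneck arithmetic, roughly; the per-lane and per-drive throughput figures below are approximations, not measurements:

Code:
# Rough check of whether an HBA's slot can feed all the drives behind it
# (per-lane and per-drive throughput numbers are approximations).
PCIE_LANE_MBPS = {"1.x": 250, "2.0": 500, "3.0": 985}   # usable MB/s per lane, approx.

def slot_bandwidth(gen: str, lanes: int) -> int:
    return PCIE_LANE_MBPS[gen] * lanes

drives = 12
per_drive_mbps = 150                 # assumed sequential throughput of one spinning disk
print("demand:", drives * per_drive_mbps, "MB/s")              # 1800 MB/s
print("PCIe 1.x x4 slot:", slot_bandwidth("1.x", 4), "MB/s")   # 1000 -> bottleneck
print("PCIe 2.0 x8 slot:", slot_bandwidth("2.0", 8), "MB/s")   # 4000 -> plenty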


- Memory requirements depend on application/role. ECC is recommended, and a good starting point is at least 4GB. Remember, memory is performance.

With ZFS, the rule of thumb is 1GB of memory for every terabyte of storage; with hardware RAID it is largely dependent on cache size, but you should shoot for at least 1GB per drive. With ZFS and a base of 5x2TB, that would be 10GB of memory – with a RAID HBA it would be 5GB of RAM.
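
If you want to play with drive counts and sizes, the ZFS rule of thumb above is nothing more than this:

Code:
# ~1GB of RAM per TB of raw pool capacity (the ZFS rule of thumb above).
def zfs_ram_gb(drives: int, drive_tb: float) -> float:
    return drives * drive_tb

print(zfs_ram_gb(5, 2))    # the 5x2TB example above -> 10GB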


- PSU: if you are running 5 or more drives, you'll need at least 350W, probably bigger depending on the HBA and the number of drives (a rough sizing sketch follows below). Remember, a PSU is often a pay-me-now or pay-me-later consideration – less expensive standard PSUs mean a bigger power bill.

RAID cards often offer a write-back cache policy, which can provide a dramatic performance advantage (you write to memory, and the card writes that to disk). This requires either a RAID card BBU or a UPS, something that is wise to have anyway.
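
As promised, a rough PSU sizing sketch for the point above; the per-component wattages and the headroom factor are assumptions, and staggered spin-up (see the RAID controller note earlier) lets you get away with less:

Code:
# Very rough PSU sizing for a multi-drive NAS (all wattages are assumptions).
def psu_estimate(drives: int, cpu_w: int = 95, board_w: int = 40,
                 drive_spinup_w: int = 30, headroom: float = 1.3) -> int:
    """Worst case: every drive spinning up at once, plus CPU and board, with headroom."""
    return round((cpu_w + board_w + drives * drive_spinup_w) * headroom)

print(psu_estimate(5))    # ~370W, in line with the 350W+ suggestion above
print(psu_estimate(12))   # ~644W for a larger hot-swap build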


- Hard drives: I prefer quality consumer drives, but others recommend enterprise-grade drives. Recerts are a gamble.

Keep your boot and configuration on a small drive apart from your storage; it will allow for easier upgrades and backups.

Using an SSD for your system drive can offer significant performance advantages, especially for caching.

If you use a USB Flash drive for booting ( FreeNAS ), keep a hot spare ready in case of failure.


- OS: FreeNAS, unRAID, NexentaStor, and OpenFiler (not recommended). unRAID and NexentaStor come as size-limited free editions. FreeNAS is the most popular and best supported of all the NAS distros.


An additional note: one of the major advantages of a DIY NAS is that it can grow – it can be upgraded, and the components and drives re-used. To accomplish that, though, a major consideration in starting your build has to be your upgrade path. It is much cheaper to buy a slightly more capable PSU than to completely replace one that doesn't make the grade. A used 12-drive RAID card on eBay is maybe 10-20% more than an 8-drive one.

In my personal opinion, if you want 1- or 2-drive network storage, bypass a separate server altogether, add those drives to your desktop or HTPC, and use mirroring software (Acronis, SyncBack) to mirror the storage.
 
When building a DIY NAS, the first question you need to answer for yourself is "Is it worth it?". The competitive environment around consumer NASes means that building your own 3-5 drive NAS offers little savings, maybe between 50 and 150 dollars, and for your savings you get no warranty, no support, and a less polished product (in performance and ease of use), plus a definite increase in the probability of configuration/build/maintenance headaches.
For the general case, I agree. If you just want LAN storage, go with one of the units that gets good reviews here. If you want to do any local processing on the NAS (for example, video encoding), a consumer NAS may not have the CPU / memory to do this faster than on one of your client PCs. If the hardware is fast enough to land a unit in the top half of the benchmark scores here, there's little incentive for the vendor to go with much faster pieces. With dual Xeon CPUs, the RAIDzilla 2 can process and write local files far faster (500Mbyte/sec) than its network connection allows. Thus, there's a huge benefit to doing the processing locally.
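
To put that gap in numbers (the usable-payload factor is an assumption):

Code:
# Local disk throughput vs. a single gigabit link.
local_mb_s = 500                 # sustained local throughput quoted above
gige_mb_s = 1000 / 8 * 0.95      # ~119 MB/s of usable gigabit Ethernet payload (assumed)
print(f"local processing is roughly {local_mb_s / gige_mb_s:.1f}x faster than shipping the data over GigE")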

Another issue to consider with a commercial unit is that some manufacturers seem to concentrate on making an attractive case and a flashy GUI, and performance and reliability take a back seat to that. There have been many reviews here (fortunately, mostly older ones - I think manufacturers are doing a better job lately) where pulling a disk out of the RAID locked up the NAS, made the entire RAID volume inaccessible, or the like.

You're also somewhat limited to the software they put on the NAS. Some vendors support 3rd-party plugins and some users have written their own. The vendor won't provide support for those, however. I set up a 2-bay NAS from a well-regarded manufacturer for a friend of mine. Everything worked flawlessly at her house, but she then decided she wanted to access the files from her office as well. The NAS software supplied by the vendor could supposedly do this, but it didn't work, probably due to aggressive port filtering by her cable ISP. The vendor wouldn't tell me what ports were needed or if the service could be configured to alternate ports, and there was no way on the box to see what the software was trying to do. On a homebrew box, a quick look at the output from tcpdump and lsof, followed by a quick config file edit, would have gotten things going.

Lastly, how long a service life do you expect from the NAS? Will the vendor still support that model in 5 years for things like security fixes? How about migration to a newer generation from that vendor? Indeed, will that vendor (or their NAS division) still be around in 5 years?

All of the above is completely irrelevant if you're not going to take an active role in maintaining and updating a DIY NAS. It can be impacted by many of the same security holes as a commercial unit can have, so you need to keep up with things. I maintain dozens of FreeBSD systems anyway, so managing 6 FreeBSD-based NAS boxes isn't much extra work at all. Actually, far less than to remember to check the vendor's website for updates to my friend's commercial NAS.

Software RAID will offer more flexibility at lower cost. Hardware RAID offers slightly better performance and better reliability - but at a cost.
I find that ZFS on a bunch of drives is faster than the hardware RAID on the 3Ware cards. Part of that is because I have 40GB or so of cache memory, compared to less than 1GB on the controller. Part is because I have a 256GB SSD to level out the write load on the array. You also get lots of additional features (compression, deduplication, storing redundant copies of files, individual file checksumming, snapshots, and so on) with ZFS. I have my 16 2TB drives configured as single drive volumes exported individually to FreeBSD. That gets me the best of both worlds - ZFS can do the "heavy lifting", but once ZFS decides that data needs to go to a particular drive, it hands it to the 3Ware which has its own cache and error management.

No reason to go with less than a 2GHz CPU; additional applications such as BitTorrent, DLNA media serving, or transcoding might need more.
If you're going to use ZFS, plan on having lots of memory. 8GB is a good place to start. 4GB is not enough for ZFS.

Where are you going to be running your NAS? If in the living room or bedroom, the quality of the cooling and the noise it makes become major design constraints. You may need a better, quieter case.
This is one place where my RAIDzilla 2 really falls short. Since it is in a dedicated server room (my downstairs dining room), it doesn't bother me that much. During a hard reset or reboot, the fans run at full speed until the BMC takes control of them. During this time (less than a minute), it sounds like a plane taking off from an aircraft carrier.

Two NICs, ideally Intel, or a Supermicro/Intel motherboard with dual gigabit NICs. Two, because in the future you may want to aggregate your NICs to maximize available bandwidth.
Note that link aggregation doesn't help traffic between the NAS and a single client - aggregation relies on various combinations of source and destination info which will be constant for a given client connection. If your NAS will be supporting multiple clients, then aggregation may be helpful. Make sure that your NAS can still handle more than 1GBit/sec with multiple clients - a server with limited memory for buffering may be slowed down greatly due to disk seeks.
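
If it helps, here's the idea in a few lines of Python. Real switches and NICs hash vendor-specific combinations of MAC/IP/port fields, so treat this purely as an illustration of why one client/server pair never exceeds a single link's speed:

Code:
# Illustrative only: aggregation picks one physical link per flow by hashing
# the endpoints, so a given client/server pair always lands on the same link.
import zlib

def pick_link(src_mac: str, dst_mac: str, num_links: int = 2) -> int:
    return zlib.crc32((src_mac + dst_mac).encode()) % num_links

nas = "00:25:90:aa:bb:cc"                      # made-up MAC addresses
print(pick_link("00:11:22:33:44:55", nas))     # one client: always the same link
print(pick_link("00:11:22:33:44:66", nas))     # a second client may hash to the other link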

Pay attention to HBA specs, you may end up with a card that only supports 2TB SATA II drives, when you were hoping to someday upgrade to SATA III 4TB drives.
Hopefully the bus (PCI-E, etc.) you've selected will still be in common use when you're ready to upgrade, and you'll be able to buy a newer (still possibly used) card that will handle larger / faster drives. In any case, doing an upgrade like this "in place" can be tricky, regardless of whether you're using a commercial NAS or a DIY one. Ideally you'd have a whole second unit and copy the data over and do a lot of testing before placing it into production.

Using a SSD for your system drive can offer significant performance advantages, especially for Caching.

If you use a USB Flash drive for booting ( FreeNAS ), keep a hot spare ready in case of failure.
Make sure the OS you're using takes precautions to not continually write to the SSD / USB flash drive. I know of a number of people who used PCI CF adapters and were going through CF cards like crazy because the OS likes to write lots and lots of log files, and the card didn't have a good wear leveling algorithm.
 
In my personal opinion, if you want 1- or 2-drive network storage, bypass a separate server altogether, add those drives to your desktop or HTPC, and use mirroring software (Acronis, SyncBack) to mirror the storage.
This is a very good idea. There's no additional software to learn, as you can manage the drives using your desktop / HTPC operating system. If that system will be powered on all the time, you can share the contents over the network using the native OS tools, and any necessary transcoding / formatting routines for things like DLNA can run on that system as well.

You might consider USB external drives. They can be very small and completely powered by USB. If you get more than one, you can copy from one to the other and then keep the second one at work or a friend's house (most drives offer encryption if privacy is an issue) or even put it in a safe deposit box. All that's needed for disaster recovery is another computer with a USB port (and knowledge of your password if you used one). Of course, don't forget to keep your backup up-to-date - it is easy to have it become "out of sight, out of mind".

Another possibility for backup is one of the cloud-based services. That is best for data that doesn't change often, as the upload path between you and the cloud will be a very slow (compared to Ethernet) connection and your ISP may limit the amount of traffic you can send. Some of the cloud services apparently let you send them your data on a hard drive or optical media.

With a cloud service, you need to think carefully about how you are going to safeguard your data against viewing by others (if you don't pre-encrypt it yourself before you put it in the cloud, your cloud provider will know your encryption key) as well as what happens if the cloud provider goes temporarily or permanently offline. A temporary outage is an inconvenience; a permanent one can be a disaster if that's the only copy you have of something you need.

A lot of the issues I'm talking about are backup / data protection issues. Having loads and loads of storage normally isn't useful if there's no protection from data loss or corruption. I once read an article where the author said something like "If you don't have a tested backup plan, why not just delete your files now - that is a lot cheaper than buying storage."

In the medium-old days, people could just burn a bunch of CDs with the contents of the hard drive, since the size ratio of a CD-ROM to an average hard drive wasn't that huge - 600MB-ish on a CD vs. a 10GB hard drive. Hard drives are now 200x larger than that (2TB), but the largest reasonably-priced optical media is single-layer Blu-ray at 25GB.
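
The disc-count arithmetic, for anyone tempted to try (capacities as cited above, using 650MB for a CD):

Code:
# How the media-to-drive ratio has shifted.
import math

print(math.ceil(10 * 1000 / 650), "CDs to back up an old 10GB drive")        # 16
print(math.ceil(2000 / 25), "single-layer Blu-rays to back up a 2TB drive")  # 80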

Tape is not a practical solution for the average home user - modern drives are just too expensive and will require multiple tape changes to perform a full backup.
 
Another issue to consider with a commercial unit is that some manufacturers seem to concentrate on making an attractive case and a flashy GUI, and performance and reliability take a back seat to that.

I have found consumer RAID servers to offer excellent performance; the TCP/IP stack and RAID processing have been significantly tuned, tuning that is often out of the reach of the average builder.

Beyond that, I agree: consumer NASes are not well positioned to handle significant overloading. Transcoding, encryption, and other applications are not well suited to a box whose reason for being is serving files over the network. If you want to overload your NAS, building your own is wise – even better would be building or buying a separate box to do those tasks.

I was a strong fan of the technical capabilities of Thecus, one of the better NAS providers - until they started shipping NASes with HDMI ports. Is it a dessert topping or a floor wax?


Lastly, how long a service life do you expect from the NAS? Will the vendor still support that model in 5 years for things like security fixes? How about migration to a newer generation from that vendor? Indeed, will that vendor (or their NAS division) still be around in 5 years?

An upgrade path is a wonderful thing, something you can't expect from an appliance. Part of the trade off of going commercial.

All of the above is completely irrelevant if you're not going to take an active role in maintaining and updating a DIY NAS.

There are downsides either way. Upgrading Ol' Shuck to the most recent version of OpenFiler is not an option: the fibre channel support software has completely changed in the new version, I do not feel confident it has the same reliability as the older version, and it is definitely not as well documented. But since Ol' Shuck's last downtime was over 4 months ago, when the power cord was accidentally pulled, and it has been absolutely rock solid, I see no compelling reason to upgrade it.

In the case of ZFS on FreeBSD, there has been grumbling about performance issues under Version 9.

I find that ZFS on a bunch of drives is faster than the hardware RAID on the 3Ware cards...

I too prefer this two-tier approach: ZFS with a RAID card presenting JBOD disks. Though it should be noted that this is not generally recommended – there are issues of drives dropping out / timing out when ZFS does not have direct control of the disk. I've never seen a problem, but I run FreeNAS/ZFS on top of ESXi, which means it is twice removed from the disk.


Note that link aggregation doesn't help traffic between the NAS and a single client - aggregation relies on various combinations of source and destination info which will be constant for a given client connection.

I also didn't point out that a managed switch is needed to support aggregation.


Hopefully the bus (PCI-E, etc.) you've selected will still be in common use when you're ready to upgrade, and you'll be able to buy a newer (still possibly used) card that will handle larger / faster drives.

Interestingly, it is PCIe's superseding of PCI-X that made it possible for me to build Ol' Shuck – there is a load of used enterprise hardware for sale on eBay that otherwise would have broken the bank.
 
I have found consumer RAID servers to offer excellent performance, the TCP/IP stack and RAID processing have been significantly tuned, tuning that is often out of the reach of the average builder.
Agreed. The RAIDzilla 2 probably has at least 50% more disk I/O performance than I'm getting out of it now. But at over 500MByte/sec continuous, it is far faster than what a GigE can handle. So I stopped tuning before things got really esoteric.

Beyond that, I agree, consumer NASes are not well positioned to handle significant overloading, Transcoding, Encryption, and other applications are not well suited to a box whose reason for being is serving files over the network. If you want to overload your NAS, building your own is wise - even better would be building/buying a separate box to do those tasks.
The issue often ends up being whether it is faster to ship the data from the NAS to another box, process it there, and return the results to the NAS. In that case, the Ethernet can be a bottleneck IF you have the computational horsepower in the NAS.

An upgrade path is a wonderful thing, something you can't expect from an appliance. Part of the trade off of going commercial.
I was actually thinking of things like standardized hot-swap trays. If someone buys a 2-bay Brand X server today, will they be able to put those trays and drives in 2 bays of next year's 5-bay Brand X server? Aside from the trays electrically and physically fitting, will the Brand X NAS OS recognize the older drives and work properly?

There are downsides either way. Upgrading Ol' Shuck to the most recent version of OpenFiler is not an option: the fibre channel support software has completely changed in the new version, I do not feel confident it has the same reliability as the older version, and it is definitely not as well documented. But since Ol' Shuck's last downtime was over 4 months ago, when the power cord was accidentally pulled, and it has been absolutely rock solid, I see no compelling reason to upgrade it.
I was talking about security problems being discovered in one or more of the NAS software components. Hopefully a commercial vendor will release a new version of their NAS software which includes the fix. If you build your own NAS from various software packages (as I do), you need to keep an eye on the updates for those packages. I expect the non-commercial "all-in-one" NAS products fall somewhere between those two endpoints.

I'll add another post later concerning ZFS performance and configuration - I was quite far along with a detailed reply when my web browser decided to make it vanish.
 
