AM2 based NAS in 2015?

cyruspy

New Around Here
I have a desktop system built around 2008 which was pretty decent at that time and has a solid tower case:

Asus M2N32-SLI Deluxe
Athlon 64 X2 6400+ (Socket AM2)
4x2GB DDR2

I would like to build a 15 drive NAS using 3 of those 5-in-3 hotplug cages with the onboard SATA ports and additional PCI SATA HBAs.

Can I expect a decent power consumption/performance balance from that build, or should I look for current low-power processors? It will be mostly a multimedia hub and backup destination.
 
It should work fine; consider using FreeNAS too. Make sure to at least configure the controller as AHCI or RAID in the BIOS.

If you are using a RAID/SATA card, make sure that it supports hot-plugging. The card may have its own BIOS, which you may want to configure too.

The important thing to consider is that the PCI bus is shared and tops out at 133MB/s, so that's 133MB/s of total bandwidth for all drives attached to the PCI bus. You might be able to overclock the PCI bus slightly, or if you're lucky your system may give each PCI bus its own bandwidth. Use PCIe or PCI-X where possible.
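To put that shared-bus ceiling in numbers, here's a rough sketch (assuming classic 32-bit/33MHz PCI; the drive counts are hypothetical):

```python
# Classic 32-bit/33 MHz PCI moves ~133 MB/s total, shared by
# every device on the bus -- not 133 MB/s per device.
PCI_BUS_MBPS = 133


def per_drive_bandwidth(drives_on_bus: int) -> float:
    """Best-case sequential MB/s per drive when all stream at once."""
    return PCI_BUS_MBPS / drives_on_bus


# Eight drives hanging off PCI HBAs split the same 133 MB/s:
print(per_drive_bandwidth(8))  # ~16.6 MB/s each, worst case
```

For comparison, even a single PCIe 1.x lane offers roughly 250MB/s per direction, which is why the advice above points at PCIe or PCI-X cards.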

Performance-wise it should be fine, but I think it may be a power hog when idle. You might want to measure idle power use with a wattmeter and use drives with very good idle power draw. Newer systems idle at much lower power, and there are also low-power variants of desktop CPUs, such as the Xeon L series or the ULV variants of desktop CPUs. Don't use Intel Atoms or similar, because they don't have the horsepower to drive a 15-drive NAS if you expect really fast throughput. ECC memory with a file system like ZFS can really help to catch errors too, but it's not all that important unless you run a file system like ZFS, which uses a lot of RAM and performs checksum comparisons.
 
It depends on what you are doing with it.

A newer Atom will handle it just fine, though if you are using 10GbE and RAID in the thing, no, it won't keep up.

Even with SMB multichannel, it should be fine for a couple of gigabits of bandwidth, basic RAID rebuilds, etc.

I'd personally look at an inexpensive H97 board and a Haswell Celeron or Pentium processor.

Odds are excellent you'd cut idle power consumption to a quarter of what that AM2 system will use.

I had an AM2-based Sempron (I think it was AM2), which draws less power than what you are looking at. With a single 2TB drive, all unused board features disabled, a single 2GB DDR3 stick, and a bronze-rated PSU...it was using about 38w at idle with the drive spun down.

My current Ivy Bridge-based Celeron, with a mini ATX H77 board, 8GB of RAM, an SSD boot drive, a pair of 2TB drives in RAID0, and a pair of NICs (before, I was using the onboard NIC; now I am using a pair of Intel single-port NICs), with the exact same PSU, uses 21w at idle.

That is a $17-a-year difference in operating cost, with more RAM, an extra drive, and extra NICs. If that old system had been spec'd similarly, it likely would have idled at more like 42-45w...
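The yearly figure follows directly from the watt delta. A sketch of the arithmetic, assuming an electricity rate of about $0.11/kWh (the rate is my assumption; the thread doesn't state one):

```python
def annual_cost_usd(watts: float, usd_per_kwh: float = 0.11) -> float:
    """Cost of running a constant electrical load for one year."""
    kwh_per_year = watts * 24 * 365 / 1000  # W -> kWh over a year
    return kwh_per_year * usd_per_kwh


old_idle, new_idle = 38, 21        # watts, from the posts above
delta = old_idle - new_idle        # 17 W saved at idle
print(round(annual_cost_usd(delta), 2))  # ~$16.38/yr, close to the quoted $17
```

At a higher rate (say $0.20/kWh) the same 17w delta is worth roughly $30 a year, which lines up with the $20-30 range mentioned below.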

Considering how cheap it is to get a "low end" board and a low-end processor, and how much more power efficient it would be, just do it. For a probably $1000+ build (all those drives), an extra $100 in cost for what might well be $20-30 a year in electrical savings (could be more; the delta under actual load is likely bigger than at idle), you might as well.

Also look for a nice robust PSU.

Also consider fewer, higher-capacity drives and reasonable spin-down timers (15 drives at idle will probably burn 80-100w of power; spun down, likely 5-15w).
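Those array-power estimates are consistent with typical per-drive figures. A quick sanity check, with per-drive numbers that are my own rough assumptions (real values vary by model):

```python
# Typical 3.5" desktop drive power draw (rough assumptions, not measured):
IDLE_W_PER_DRIVE = 6.0     # platters spinning, no I/O
STANDBY_W_PER_DRIVE = 0.8  # spun down


def array_power(drives: int, spun_down: bool = False) -> float:
    """Estimated total watts for an array of identical drives."""
    per_drive = STANDBY_W_PER_DRIVE if spun_down else IDLE_W_PER_DRIVE
    return drives * per_drive


print(array_power(15))                  # 90.0 W -- inside the 80-100 W estimate
print(array_power(15, spun_down=True))  # ~12 W -- inside the 5-15 W estimate
```

The same function makes the six-drive-versus-fifteen-drive comparison below concrete: six drives idling is roughly 36w versus roughly 90w for fifteen.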

Unless you are looking at 15 salvaged drives of low capacity, that is a MASSIVE amount of storage (even if some of it is duplicate volumes).

My suggestion is, if some of it is RAID5/6, don't. Just have a backup volume to back the primary volume up to. That's better redundancy, and if it would have been a RAID5/6 backup of a RAID5/6 volume, you just saved several drives.

Also, the more drives, the greater the likelihood that one is going to fail. If the volume is already backed up, preferably with two backups, your butt is already covered if one big drive fails.

Three 4TB drives for a primary and three 4TB drives for the back-up volume is going to use a LOT less power and also allow you to have a much smaller PSU than six 2TB drives for primary and six 2TB drives for back-up.

It'll also be quieter, produce less heat, and likely cost less too (as 3 and 4TB drives currently seem to be the dollars-per-capacity sweet spot).
 

It seems that the onboard Nvidia HBA doesn't support AHCI. There are PCIe slots available, so I'm thinking about using SAS HBAs to connect the disks.

Do you have a magic number for what is acceptable power consumption nowadays? I would like to spend as little as possible given I already have the basic hardware, but if there's too much of a difference, it might be a good idea to change the motherboard and CPU for something current.
 

Well, it's going to be used for backup consolidation (many family members), as a media library, and maybe as a download host. I'm also looking at an Asustor AS5008T in the long run, as it'll provide a direct HDMI connection, so this PC might become the backup/replica node.


The idea is to have a big RAID6 array with a spare (the capacity of 3 disks lost). 4TB disks at least; 6TB seems interesting too. Thanks a lot for the power consumption figures.
 
Don't do RAID6. Badddddd idea.

If the volume becomes corrupted, you have NO backup. Especially with that many spare disks, you are better off simply buying enough disks to have a completely separate backup volume, rather than just having spare disks in a RAID volume.

I'd also still spend the minimal amount of money on a new processor and board. You can get a Haswell Celeron processor and a nice but inexpensive H97 ATX or micro ATX board for around $100-120, which WILL support AHCI, and you can also use SATA port multipliers instead of needing separate HBAs or RAID cards to support the extra drives. Possibly leading to a cheaper implementation, and DEFINITELY cheaper operating expenses. Probably by a lot.
 

Thanks, it seems I'll go with a new Celeron and just discard the current hardware.

Why would you avoid RAID6? With 2 parity sets and a spare disk, it can't go wrong. I've never seen a dead RAID6, and I certainly don't want to be moving folders around between disks by hand.
 
You don't have to move files/folders between volumes by hand. There are any number of automated backup tools out there. Windows has an okay one built in, and you can download a somewhat better one from MS for free (SyncToy, which is what I use). Most free OSes also have scheduled backup tools.

I've seen RAID volumes become corrupted that are not recoverable.

If your volume becomes corrupted, no number of spare disks or parity will save a thing. Maybe some data recovery software might pull back some of the data.

RAID5/6 is also somewhat processing-intensive, whether the load lands on the RAID card or on the host CPU/controller. Performance is normally not all that amazing because of the overhead associated with parity writing and checking.

I don't use it and I don't recall well, but writes tend to be roughly equivalent to RAID0 minus one disk, while reads are much slower, or maybe it is the other way around. Anyway, I've seen tests where 4 disks that individually read/write 150MB/sec each manage 600MB/sec of read and write performance in RAID0, but in RAID5 you see something like 425MB/sec of write and 160-180MB/sec of read performance (or maybe the other way around; also, these are sequential numbers, not random).

I'd imagine it depends in part on the controller, etc., etc., but I'd assume RAID6 performance is even worse.

It might be more than enough performance for what you want for all I know, but it is at least a consideration when comparing performance to RAID0.

It also means lower power consumption. If you just have a single volume of, say, 4 disks, then when you read/write to it, only 4 disks are spun up. If that is a RAID6 volume with 7 disks (because of the parity disks and spare), you have 7 disks all going at once. If you have two 4-disk volumes, you only have 4 disks spun up, unless you are backing up the volume, in which case all 8 are spun up.

JBOD for a volume does not have redundancy like RAID1, 5, 6, or 10, but it DOES have some resilience over RAID0: if a drive drops dead, you have a reasonably good chance of recovering what was on the other drive(s), though anything that was actively spanned across the drives is gone. In RAID0, if a drive drops dead, you are SOL for all of your data.
 
Depending on how many drives you have, RAID6 isn't a bad idea. A RAID volume can get corrupted during a rebuild, which is why there's RAID6.

For a NAS, backup comes from having multiple NAS units. Perhaps there might even be a cloud-based NAS solution if you have a number of them, for both speed and reliability.

When it comes to backup, I use AeroFS to sync files on all my systems over the LAN. It's basically a LAN-only variant of Dropbox, with file versioning limited by your hard drive space.
 