Link Aggregation + Quad LAN + HP Procurve 1800-24G = Win?

MoralDelima

Occasional Visitor
I was reviewing some of Dennis Wood's posts in the QNAP-509 forum while trying to decide whether to fork out cash for my own NAS (or maybe DAS) rig or to go with a prepackaged setup. His results were really interesting, but I wonder how the following would do:

HP Procurve 1800-24G

ASUS P5BV-C/4L LGA 775 Intel 3200 ATX Server Motherboard - $209

LSI LSI00005-F 64-bit, 133MHz PCI-X SATA II MegaRAID 300-8X Kit 8 Port SATA/300 128MB RAID 0/1/5/10/50 - Retail $399.99


Athena Power CA-SWH01BH8 Black Steel Pedestal Server Case 2 External 5.25" Drive Bays - Retail $241.99

Of course, you'd have to toss in some RAM (FB-DIMMs, $90 for 2x 2GB) and a CPU (2.0 GHz dual core for about $70 if you were being cheap). Other than that, slap in 8 or so drives, configure a RAID 5 setup with a Linux distro, hook all four ports up to your Procurve, and away you go with link aggregation. That's assuming the onboard NICs support it; I can't confirm that from the spec sheet, but it would be surprising if they didn't.
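For the link aggregation piece, here's a rough sketch of what the Linux side might look like, using the kernel bonding driver in 802.3ad (LACP) mode. The interface names, IP address, and exact option syntax are placeholders and vary by distro, so treat it as illustrative only:

    # Load the bonding driver in LACP (802.3ad) mode and enslave the four onboard NICs
    modprobe bonding mode=802.3ad miimon=100
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1 eth2 eth3

The matching four ports on the Procurve would also need to be configured as an LACP trunk.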

I've been tossing NAS / DAS / etc. ideas around in my head for about 14 hours today, so I might be a little off, but can anybody spot any flaws here?
 
Just revisiting this while I have some spare time..

I'd change the RAID card to the much cheaper LSI LSI00118 (MegaRAID SAS 8344ELP): an 8-port RAID controller that takes SAS or SATA II drives, 3 Gb/s per port, greater-than-2TB array support, 128MB cache, RAID 0/1/5/10/50, PCI Express x4 (OEM card only).

At $175 for 8 RAID ports, 128MB of cache, and hardware RAID, it's a great deal.

The case would remain the same, as would the motherboard, although I'd be on the lookout for a used version of the board. I'd also pick up the CPU used, off a forum.

Also thinking about seeing what kind of rackmount setup I could swing with this.

=============================

Final setup:

ASUS P5BV-C/4L LGA 775 Intel 3200 ATX Server Motherboard - $209

LSI LSI00118 RoHS MegaRAID SAS 8344ELP 8-port RAID controller (SAS or SATA II drives, 3 Gb/s per port, >2TB array support, 128MB cache, RAID 0/1/5/10/50, PCI Express x4, OEM card only) - $175

Athena Power CA-SWH01BH8 Black Steel Pedestal Server Case 2 External 5.25" Drive Bays - Retail $241.99 (Has 8x hotswap bays)

Core 2 Duo E6400 Socket LGA775 $90 (used)

Transcend 2GB 240-Pin DDR2 FB-DIMM ECC Fully Buffered DDR2 800 (PC2 6400) Server Memory
 
The link aggregation will only make itself evident under load from multiple workstations. In other words, the LACP will work, but you'll only really see benefits if you've got multiple workstations able to receive a sustained 100 MB/s. Assuming you had four needing that simultaneously, your server disk array would need to sustain 400 MB/s real-world performance to fully take advantage of four LACP ports. If the four workstations were using single drives, they would max out at about 60 MB/s each regardless of how fast the NAS was.

So I guess what I'm saying is that a 4-port LACP setup on a server would only be useful if you need ~400 MB/s served out to your workstations. Assuming that load, your internal RAID card would need to be capable of those data rates! Two ports trunked makes sense in terms of the real-world measured performance of most RAID cards.
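A rough way to sanity-check this once a box is built (the host name and tool are just examples, and it assumes iperf is installed on the server and each workstation): run iperf from several workstations at once and compare single-client throughput against the aggregate.

    # On the server
    iperf -s

    # On each workstation, started at roughly the same time
    iperf -c nas.example.lan -t 30

A single client should still top out around one gigabit link; only the aggregate across clients can show the LACP gain.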
 
Actually, the array would need to be even faster, since performance degrades when the drives have to seek across the platters (which they do when multiple streams are being read).
 
Agreed... I just threw out 400 MB/s because 105 MB/s sustained is as good as we've seen from any gigabit connection. I suspect there'd be other real-world limitations at those data rates too, such as the bus of a single PCIe card.
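One way to put a number on the multi-stream penalty on the finished array (a sketch only: it assumes fio is available, /dev/md0 stands in for whatever device the array ends up being, and the array is big enough for the offsets used) is to compare one sequential reader against several spread across the disks:

    # One sequential reader (read-only, but double-check the device name anyway)
    fio --name=single --filename=/dev/md0 --rw=read --bs=1M --direct=1 \
        --runtime=30 --time_based --group_reporting

    # Four readers offset across the array so the heads actually have to seek
    fio --name=multi --filename=/dev/md0 --rw=read --bs=1M --direct=1 \
        --numjobs=4 --offset_increment=100G --runtime=30 --time_based --group_reporting

The gap between the two aggregate numbers is roughly the seek penalty described above.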
 
It would be cheaper to go with the AMD platform, as their consumer-grade chips support unbuffered ECC RAM. Say a middle-of-the-road Phenom II X3 tri-core and a HighPoint 3522 for hardware RAID 6 with battery backup on PCI Express.

Unbuffered ECC is very similar in price to unbuffered non-ECC RAM.
 
Have you considered ZFS via Solaris, as opposed to the RAID card, or mdadm on Linux?

Another option for the motherboard, with two quality GbE connections, is http://www.supermicro.com/products/motherboard/Xeon3000/3210/X7SB3.cfm. It also has ten SATA ports: eight on the LSI controller for your storage array and two on the Intel ICH9 for your OS. It might be a little more than the Asus, but you wouldn't need to buy a separate card. It also takes unbuffered non-ECC DDR2-800.
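If you did go the software-RAID route instead of the card, the Linux side is only a couple of commands. A minimal sketch, with placeholder device names (note that mdadm --create is destructive, so check them carefully):

    # 8-disk software RAID 5 with mdadm, then a filesystem on top
    mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]
    mkfs -t xfs /dev/md0
    mount /dev/md0 /srv/storage

    # Rough (Open)Solaris ZFS equivalent: a single raidz vdev
    # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0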
 