How To Build a Cheap Petabyte Server: Revisited

GregN

Senior Member
Would be nice to see some performance figures.

Each SATA port handles five drives; what kind of hit does that kind of multiplying have on SATA performance?

Each PCIe lane can only handle what, 2.5Gb/s? Spread across 15 drives at 3Gb/s each, funneled through a 3Gb/s channel, without the cache of a RAID controller, there must be a lot of blocking.

Have I got that right?
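
A quick back-of-the-envelope check of those numbers, as a sketch only: it uses the 2.5Gb/s lane and 3Gb/s SATA link rates quoted above, accounts only for 8b/10b line coding, and ignores what real drives actually sustain.

    # Back-of-the-envelope check of the oversubscription described above.
    # Assumptions (from the post, not measured): one PCIe 1.0 lane at 2.5Gb/s
    # raw, 15 drives behind it, each on a 3Gb/s SATA II link; 8b/10b encoding
    # on both links.

    PCIE1_LANE_RAW_GBPS = 2.5        # PCIe 1.0 raw signalling rate per lane
    SATA2_LINK_RAW_GBPS = 3.0        # SATA II per-drive link rate
    ENCODING = 8 / 10                # 8b/10b line coding overhead
    DRIVES = 15

    lane_mbps = PCIE1_LANE_RAW_GBPS * ENCODING * 1000 / 8   # ~250 MB/s to the host
    drive_mbps = SATA2_LINK_RAW_GBPS * ENCODING * 1000 / 8  # ~300 MB/s per SATA link

    print(f"Host link:            ~{lane_mbps:.0f} MB/s")
    print(f"Aggregate SATA links: ~{DRIVES * drive_mbps:.0f} MB/s")
    print(f"Oversubscription:     ~{DRIVES * drive_mbps / lane_mbps:.0f}x")
    print(f"Fair share per drive: ~{lane_mbps / DRIVES:.1f} MB/s")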
 
According to the SiI spec sheet for that controller, it is capable of an 889MB/s transfer rate when 20 disks are connected (port multiplier mode). That's when the controller is run in PCIe x8 mode. Unfortunately this Syba card is PCIe 1.0a x1, so it's limited to 250MB/s to the bus.

http://www.siliconimage.com/docs/pb3124_pb_draft 090209.pdf

I think at the very least they could have splurged for a PCIe 2.0 x1 card, which would have been 500MB/s. I believe the SuperMicro board supports PCIe 2.0; if not, another model does.
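
Putting those ceilings side by side for 20 drives on one card - a sketch using the nominal figures quoted above, not benchmarks:

    # What each ceiling works out to per drive with 20 disks behind one card.
    # The 889MB/s figure is the controller spec quoted above; the x1 figures
    # are nominal PCIe payload rates, not measured throughput.

    DRIVES = 20
    ceilings_mbps = {
        "PCIe 1.0a x1 host link (this Syba card)": 250,
        "PCIe 2.0 x1 host link": 500,
        "SiI3124 in 20-disk port multiplier mode (spec sheet)": 889,
    }

    for name, total in ceilings_mbps.items():
        print(f"{name}: {total} MB/s total, ~{total / DRIVES:.0f} MB/s per drive")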
 
You guys do bring up a good point about the overall performance. It would be great to see the numbers, but remember they only have a single gigabit connection to each "pod". So even with the low-end Syba cards, the bottleneck is still most likely the network connection. From the sounds of it, the goal is to provide massive amounts of storage with good-enough performance. I do have to wonder why they didn't use an OS that supports ZFS, though. My thought would be that for this amount of storage it would be much easier to manage using ZFS.
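
For reference, the arithmetic behind that point - a sketch that assumes a single gigabit link per pod and a typical TCP/Ethernet payload efficiency, which is an assumption rather than a measured number:

    # Why the single gigabit NIC per pod is likely the tighter bottleneck.
    # The ~94% payload efficiency for TCP over Ethernet with 1500-byte frames
    # is an assumption; the exact figure varies with frame size and protocol.

    GIGE_WIRE_MBPS = 1000 / 8      # 125 MB/s raw on the wire
    TCP_EFFICIENCY = 0.94          # assumed usable payload fraction
    PCIE1_X1_MBPS = 250            # host link of the Syba card

    net_mbps = GIGE_WIRE_MBPS * TCP_EFFICIENCY
    print(f"Usable gigabit Ethernet throughput: ~{net_mbps:.0f} MB/s")
    print(f"PCIe 1.0a x1 link to the SATA card: {PCIE1_X1_MBPS} MB/s")
    limit = "network" if net_mbps < PCIE1_X1_MBPS else "disk bus"
    print(f"Tighter bottleneck: {limit}")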

00Roush
 
Greg: High performance isn't a focus since the data is coming from an Internet uplink. The system is plenty fast to handle the I/O that it is required to do.

00Roush: I updated the article with Backblaze's response on ZFS. Basically, they would have had to switch OSes and they have no other need to do that. As the first article noted, Debian was chosen because it is "truly free".
 
Understood. I'm curious because this could be a pattern for high-capacity home builds. When building Ol'Shuck I looked at SATA cards vs. RAID cards - with the SATA cards I'd have to use software RAID. One of the issues was that high-capacity SATA cards were just as expensive as intelligent RAID controllers for PCI-X (PCI-X being required for a cheap fibre card), and everything I read indicated that the RAID controllers were more reliable (I can attest to this with Openfiler). So the decision was a no-brainer: use RAID controllers.

Looking at the Backblaze solution, it would be possible to use a single SATA controller (preferably PCIe 2.0 x8 or x16) with four port multipliers to get the same number of drives and the same capacity as Ol'Shuck (20 drives, limited by the case). That, combined with a ZFS RAID (OpenIndiana, Nexenta, etc.), might be a real low-cost NAS alternative.

Using a single SATA card would mean you could use a mini-ITX Atom motherboard, further driving down costs. By my rough calculations, a platform that supported 20 drives with a hot-swap case, sans drives, would come in at less than $500. That means you could build a 40TB NAS (2TB drives) for less than $3K complete - a lower per-terabyte cost than any of the commercial NASes. Ol'Shuck fully populated is about $500 more, but as someone on another thread pointed out, Ol'Shuck is cobbled together from some used components (MB, processors, memory and HBAs) - this would be all new, less risk.
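
A quick sketch of that cost math: the platform figure is the ~$500 estimate above, while the per-drive price is an assumed 2TB street price, not a quoted number.

    # Rough cost/TB for the hypothetical 20-bay, single-card build above.
    # The ~$500 platform figure is the estimate from the post; the per-drive
    # price is an assumed 2TB street price.

    DRIVES = 20
    DRIVE_TB = 2
    DRIVE_PRICE_USD = 120       # assumption: rough 2TB street price
    PLATFORM_COST_USD = 500     # case, PSU, mini-ITX board, SATA card, port multipliers

    raw_tb = DRIVES * DRIVE_TB
    total_usd = PLATFORM_COST_USD + DRIVES * DRIVE_PRICE_USD

    print(f"Raw capacity: {raw_tb} TB")
    print(f"Total cost:   ${total_usd}")
    print(f"Cost per TB:  ${total_usd / raw_tb:.0f}")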

But the fly in the ointment would be performance: what kind of hit would you take having 20 drives on a single SATA card?
 
Good question, Greg. I smell another article. :)
 
A lot of numbers about storage size, dimensions, electricity, costs, etc., but none about speed... is it really so terrible compared to other solutions? (Hopefully not!! ;)
 
