
Build Your Own Fibre Channel SAN


The NASPT test is an attempt to emulate real-world performance, and it largely succeeds.

Generally, the best NAS performance is achieved copying a single very large file from the NAS: caching, read-ahead, and a single network circuit all come into play to deliver the whole lickety-split experience.

Link aggregation would not improve that performance, though. A single session (the virtual circuit formed to do the copy) is assigned to one interface and is limited to the performance that interface offers. You don't get 2x bandwidth for a single session/action with link aggregation, which is why I mentioned multi-node environments.
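To illustrate why a single session stays on one link, here is a hypothetical sketch of the layer-3/4 transmit hashing that bonding drivers use (on Linux, roughly what `xmit_hash_policy=layer3+4` does in 802.3ad mode). The function and flow values are made up for illustration; real drivers use a cheaper hash, but the property is the same: one flow always maps to one member link.

```python
# Sketch of flow-based transmit hashing in link aggregation.
# Every packet of a given flow hashes to the same member link,
# so a single file-copy session never exceeds one interface's bandwidth.
import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, n_links):
    """Map one flow to one member link of the aggregate (illustrative)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % n_links

# One NAS copy session (one SMB connection) always lands on the same link:
flow = ("192.168.1.10", "192.168.1.20", 51515, 445)
assert pick_link(*flow, n_links=2) == pick_link(*flow, n_links=2)
```

Two simultaneous sessions from different clients can land on different links, which is where aggregation actually pays off.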
 
SAN and ESXi question

First of all thank you for the excellent write up, both with part I and II!!

I've been running a FreeNAS box for quite some time, serving up 11TB (formatted space) to my home network, and it has served me very well. I use it mainly to host all of my movies (HD), music, photos, software repository, etc.

I have a 2-node ESXi cluster, but each node has its own local storage, so I can't vMotion anything unless it's powered off.

I've been considering building a second NAS and configuring it for iSCSI so I can have some shared storage and can vMotion live VMs, but I fear it'll be too slow, given that my NAS hardware is nothing more than a desktop-type motherboard with software RAID 5.

I'd like to build a SAN for my ESXi cluster, but am picking your brain on how to do this without the DAS. I'd like to put an HBA in each ESXi node and direct-connect it back to the SAN storage. Would I need 2 HBAs in the SAN, one for each ESXi node?

I figure from there I can build VMs and present the storage for some shares, as well as vMotion, since the storage isn't local.
 
Multiple Computers using Same SAN?

How would you configure a SAN like the one you built here to allow multiple computers to access the disk storage? Like others, I am particularly interested in this because it would allow me to move hosts in my VM farm.

Would I need a switch, or could I link the hosts in a "bus-like" configuration of fibre?
Can you recommend a switch? I am thinking I'd prefer something more like 10Gbps (or more, depending on cost).

How does this type of solution compare to an iSCSI implementation? There is an iSCSI initiator already available for Windows; I believe it is from Microsoft...

John
 

Yes, you can use a Fibre Channel switch. I can't really recommend one; I don't have any experience using them. They are not cheap.

The approach used in the article was peer-to-peer, using a DAS to front the disks. The target side was provided by QLogic's driver/firmware software for the Fibre Channel HBA in the DAS.

You would need a switch, plus an HBA for every machine that you want to access the SAN. One of the things that made the project possible was buying two used, inexpensive HBAs (one PCI-X, one PCIe); as stated in the article, 4Gb appears to be the sweet spot for these types of cards. 10Gb and beyond is still very expensive, especially on PCIe. In the project, one of the most expensive components was the PCIe HBA. If you are doing server-to-server, you should consider PCI-X to keep prices down; you'll find the older interface is significantly better represented and cheaper on eBay.

In the language of Fibre Channel, what you are proposing is creating the FC fabric for your SAN.

The MS initiator is for iSCSI; it doesn't work with Fibre Channel. The cheapest solution for multi-node VM use would be iSCSI, as 10GbE HBAs are falling in price. You should, though, consider running a separate LAN for iSCSI (dual NICs: one your 10G card, the other the motherboard's NIC); otherwise you will be competing with your regular Internet/LAN traffic.

The advantage of FC vs. iSCSI is well stated in the article: iSCSI sits on top of TCP/IP, a general-purpose protocol stack, while FC is a strictly storage-oriented protocol with significantly fewer layers and without the same overhead (for example, addressing is much, much simpler in FC).
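For the iSCSI route, a minimal sketch of attaching a Linux initiator to a target over a dedicated storage LAN, using the standard open-iscsi tools. The portal IP and IQN below are purely illustrative assumptions:

```shell
# Discover targets advertised by the storage box's portal
# (10.10.10.1 is an assumed address on the dedicated storage LAN)
iscsiadm -m discovery -t sendtargets -p 10.10.10.1

# Log in to the discovered target (IQN here is illustrative)
iscsiadm -m node -T iqn.2013-01.lan.storage:tank -p 10.10.10.1 --login

# The LUN now shows up as a local block device (e.g. /dev/sdb)
lsblk
```

On Windows, the Microsoft iSCSI Initiator accomplishes the same discovery/login steps through its GUI or `iscsicli`.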
 
I'd like to build a SAN for my ESXi cluster, but am picking your brain on how to do this without the DAS. Would I need 2 HBAs in the SAN, one for each ESXi node?

To accomplish what you are talking about, you would need a Fibre Channel switch (with LC fiber connectors). One HBA per machine (3 total), all connecting to the switch. Remember, with a SAN the client (or, in FC/iSCSI parlance, the initiator) creates and manages the filesystem.

Why use vMotion? Why not just do copying and cloning on your storage server, your NAS? You'd only be limited by disk transfer speeds local to the server.
 
Request for Advice

Hi all,

I went through the article on how to build your own NAS, but I was not able to source the right motherboard. Instead, I got the model below. Is it possible to do the build described using this motherboard?

SuperMicro X6DHP-8G

Regards,
Tiger
 
Hi Greg

Hi,

First, let me thank you for this wonderful article; it has helped me a lot in preparing myself to build my own NAS.

I have almost all the items required for building my own NAS:

1. SuperMicro X6DH8-G2 / dual 3.8GHz / 4GB RAM
2. 3ware 9550SX-12SI AMCC 12-port 3.0Gb/s SATA II PCI-X RAID controller
3. Cooler Master GX 550W (RS-550-ACAA-E3) power supply
4. 6x Seagate Barracuda 2TB desktop internal hard drives (ST2000DM001)
5. NZXT Source 210 Elite case
6. Kingston SSDNow V100 64GB SSD (SV100S2/64G)

I have only one question: currently I can purchase only 6 HDDs, but I have the option of adding 6 more. Can I later scale this same build from 6 to 12 disks? Is that possible?

If not, what is the alternative?

Please do help me out here.

Regards,
Tiger
 
Expanding your number of disks later really depends on the OS you choose to use. How much storage are you looking for? Do you need all 12 disks to appear as one large disk? How important is your data... how much redundancy do you want?

My opinion is that ZFS is the easiest if you are planning on adding disks in batches. You could set up a storage pool with the first 6 disks as one RAIDZ device (similar to RAID 5) and have about 10TB of usable storage. Then later you can add the other 6 drives as another RAIDZ device in the same storage pool. This effectively doubles the size of the pool, which then acts similar to a RAID 50 array, at least for all new data written. Common OSes that are set up for NAS/SAN use and include ZFS are NAS4Free, FreeNAS, and Nexenta. So far I have found NAS4Free gives me the best performance with the 4GB RAM (virtual machine) setup I have been testing here at home.
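The batch expansion described above can be sketched with standard ZFS commands. The pool name ("tank") and FreeBSD-style device names are assumptions for illustration:

```shell
# Create the pool with the first six disks as one RAIDZ vdev
# (6x2TB in RAIDZ gives roughly 10TB usable)
zpool create tank raidz ada0 ada1 ada2 ada3 ada4 ada5

# Later, add the second six disks as a second RAIDZ vdev.
# The pool grows in place; new writes are striped across both
# vdevs, behaving similarly to RAID 50.
zpool add tank raidz ada6 ada7 ada8 ada9 ada10 ada11

# Verify both vdevs are present
zpool status tank
```

Note that you cannot add single disks to an existing RAIDZ vdev; expansion happens by adding whole vdevs, which is why buying the second batch of 6 matching disks works well here.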

There are other OSes that will do expansion but I don't really have much experience with them so I can't comment.

Hope that helps a little. Let us know how it goes.

00Roush
 
Hi 00Roush

Thank you for your reply. You have seen my specs; I was thinking of using FreeNAS, but after you mentioned NAS4Free I looked it up and found better reviews for it, so I am now thinking of using NAS4Free.

Thanks :)

I was thinking of having one big NAS with 12 HDDs; RAID 50 was my idea.

I do not know anything about ZFS; I am a total newbie. I looked up this tutorial on SmallNetBuilder and started to build my own, but I have no clue where to start.

Like you mentioned, if I add HDDs using ZFS, will it look like one big HDD or 2 big HDDs?

Regards

regards
 
Bump!

Some time has passed since this article and thread was posted and I am curious:

1) How has the SAN project worked over time?

2) Given that FC products continue to drop in price, what would you do differently?

3) Is it possible to use an FC HBA in place of a NIC on a Windows box (say, Windows 8 Pro)? This is beyond the scope of the article, but the prices are dirt cheap for fast HBAs.
 
The original article is outdated. Prices have dropped. Openfiler is being ported to CentOS. Fibre Channel SAN appears to be less relevant. The Norco RPC-4224 is a POS. I could go on, but the message is clear.

I'm using FreeNAS v9.2.1, a Xeon E3-1230 v2 CPU, a Supermicro X9SCM-F-O motherboard, 32GB ECC RAM, and three LSI 9211-8i controller cards flashed as HBAs. Storage is six 2TB Seagates configured as RAIDZ2. The case is the aforementioned POS Norco RPC-4224.

Unfortunately, this topic doesn't generate much interest. The stock answer seems to be buy QNAP/Synology. I prefer to build my own solution.
 
Looking for updated guide/specs

I've been looking around the web for a guide on how to build a low-cost, complete Fibre Channel SAN: storage, two switches, HBAs, etc. I need a true Fibre Channel SAN for a home VMware lab. I realize the original article is relatively old, so is there one that is more contemporary? Thank you.
 
