Am I silly for wanting Fibre Channel and/or InfiniBand?

Twice_Shy

Occasional Visitor
Since I'm slowly working through the process of designing a home NAS/SAN that I hope to grow over time, and I see that this issue is a big one affecting which NAS software is even an option, I decided to re-ask it head on...

Currently, faster-than-gigabit InfiniBand and Fibre Channel hardware is available used on eBay for a pittance, much cheaper than 10-gigabit Ethernet hardware, at least for HBAs. But even if 10-gigabit Ethernet hardware drops in price soon, is it the superior solution?

I'm told one issue is the overhead of TCP/IP and the like, so 10-gig InfiniBand, for instance, will always be faster than 10-gig Ethernet. How much that matters probably depends on the scenario - is 4-gig Fibre Channel ever going to be faster than 10-gig Ethernet just because it lacks that overhead? (I doubt it, but who knows.)
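As a rough sanity check on that question, here's a hypothetical back-of-the-envelope comparison (a minimal Python sketch using nominal spec-sheet figures I'm assuming, not measurements): 4-gig FC gives up 20% of its line rate to 8b/10b coding, while 10-gig Ethernet gives up only a few percent to TCP/IP and Ethernet framing at a standard 1500-byte MTU.

```python
# Rough payload-rate comparison; nominal spec-sheet assumptions, not benchmarks.

# 4G Fibre Channel: 4.25 Gbaud line rate, 8b/10b coding -> usable payload
fc4_payload_gbps = 4.25 * (8 / 10)                    # ~3.4 Gbit/s (~425 MB/s)

# 10GbE carrying TCP at a standard 1500-byte MTU:
wire_bytes = 8 + 14 + 1500 + 4 + 12                   # preamble + Ethernet header + IP packet + FCS + interframe gap
tcp_payload_bytes = 1500 - 20 - 20                    # IP packet minus IP and TCP headers
tengbe_payload_gbps = 10.0 * tcp_payload_bytes / wire_bytes   # ~9.5 Gbit/s (~1.19 GB/s)

print(f"4G FC usable payload : {fc4_payload_gbps:.2f} Gbit/s")
print(f"10GbE usable payload : {tengbe_payload_gbps:.2f} Gbit/s")
```

If those assumptions hold, 4-gig FC can't catch 10-gig Ethernet on raw throughput even after TCP/IP overhead; the FC/IB advantages are more about latency and CPU offload than about header bytes.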

Future expandability will always favor IB/FC, which of course already have much faster, lower-latency systems on the market. Even if we start saturating 10-gig links with ramdisks and SSDs, I'm pretty sure the next round of faster-than-10-gig hardware will then roll out and put the performance crown back there.


Am I correct or incorrect in assuming there are security benefits to using InfiniBand/Fibre Channel? Part of the idea is that TCP/IP and Ethernet just seem so eminently hackable, and I'm assuming the difference in standards and the fact that they aren't routable(?) puts up a barrier to the usual malware. I am aware of features like automatic failover, combining two or more links for more bandwidth, and different network topologies as options too, but I assumed none of that is inherently impossible on Ethernet - either the software implements such a feature or it doesn't. Perhaps the only other remaining question is whether there are any non-networked storage solutions for which FC/IB are still, and are likely to remain, a killer app, even if it comes down to "certain clustering software only works with that and nobody will rewrite it." In other words, side nerd reasons to still consider them anyway if all else is equal.


In short, are there reasons beyond raw performance that still make IB/FC worth considering, or will a newer 10GbE NAS box with a RAID array of SSDs not only perform just as well but also be just as securable if properly set up?
 
One month later and no opinions? :p The article that I believe led me here in the first place was the budget home Fibre Channel SAN/NAS; I was hoping some experts on here would have insight, or that others had replicated or played with such configurations...
 
Hello Twice_Shy,

I have an enterprise SAN engineer background. Unfortunately I had to leave the industry completely in 2009; however, I still retain a few tidbits of info, so perhaps I can help. InfiniBand was something being considered, but my practical experience with it is zilch. My experience and knowledge are of Brocade, Hitachi, McData, and Cisco switches and storage frames, fabrics, Fibre Channel, replication, etc. My NAS knowledge is a little above basic.

I've reread your post a couple of times and I find myself asking the same 2 questions.

1. What is the purpose of your NAS/SAN and how do you envision it growing?

2. What are your security concerns? You mentioned Fibre Channel not being routable over IP, and you are correct. However, if your server/workstation, which contains both the HBA and a network card, becomes compromised, your risk is the same on either platform.
 
The purpose of the NAS/SAN is a progression of priorities: first cheap storage (the first bottleneck is having enough), second performance (the next bottleneck will be the time it takes to do things), then reliability (the third bottleneck is downtime and missed deadlines). I'm trying to store ridiculous amounts of 8K film footage that may be opportunistically available because film students have access to a Red Weapon camera, and I don't want to throw away anything that's been shot, since I may not have a chance to get it again, at least at that quality.

I don't know how much it will grow, but it's about having a plan to be able to grow it in the future, then improve performance, and then reliability, in that order, as funding becomes available. What I'm planning so far is already looking like at least 60TB online; I have that much old footage and projects sitting in my room right now, and that data will migrate into a server along with new footage that could grow even faster, so I expect 300TB as a safe guess.

My main security concern is remote hacker-type stuff, not local. I guess I'm worried about what happens if a router gets hacked and packets get spied on over the local network; if IB/FC can't be routed, it can't be spied on that way. If the workstation connected to it is hacked from the beachhead of someone taking over the router, obviously that's a problem, but I was hoping to make that less likely through virtualization and loading the same frozen image every day from the file server.
 
Expanding slightly on the previous... part of what I'm trying to learn is whether I should ever expect to really need a SAN, or whether a NAS will be able to do the job just as well. The biggest use is going to be editing and processing terabytes of high-resolution video footage, but there will be other things going on in the background, like virtualized workstation sessions (undecided between Xen, VMware, and the like), or cases where certain applications require block-level access for whatever reason and won't work with a network-mounted NAS directory.

The interest in Fibre Channel and InfiniBand is because it seems like a cheaper way to get 10-gig Ethernet speeds than 10-gig Ethernet itself right now, especially with how many cards are dual-ported and how small I expect my clusters to be. (For instance, three workstations sharing a single NVMe SSD, like an Intel 750 in a main workstation that can also be used by two other computers, since a couple of FC/IB cards are much cheaper than buying two more SSDs. That idea may be half-baked, or the link may bottleneck enough to reduce its usefulness, but I assumed it would still always be faster than local hard drives. I'm trying to find clever little workarounds like that to save a few hundred here and a few hundred there, because when this is applied over and over at every level of a project, hundreds add up to thousands, which add up to over ten thousand at some point if I reduce waste in enough ways.)
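To gauge whether that shared-SSD idea bottlenecks on the link, here's a hypothetical check (Python, with rough spec-sheet numbers I'm assuming rather than anything measured), comparing usable link rates against an Intel 750's local sequential read and a plain SATA hard drive:

```python
# Hypothetical bottleneck check for sharing one NVMe SSD over an older FC/IB link.
# All figures are rough spec-sheet assumptions (MB/s), not measurements.

local_hdd_mb_s = 180          # typical SATA hard drive, sequential
intel_750_mb_s = 2200         # Intel 750 NVMe, sequential read

links_mb_s = {
    "4G FC (8b/10b)":          4.25e3 / 8 * 0.8,   # ~425 MB/s usable
    "8G FC (8b/10b)":          8.5e3 / 8 * 0.8,    # ~850 MB/s usable
    "QDR IB (32 Gbit/s data)": 32e3 / 8,           # ~4000 MB/s usable
}

for name, link in links_mb_s.items():
    shared = min(link, intel_750_mb_s)
    print(f"{name}: ~{link:.0f} MB/s link -> shared SSD tops out near {shared:.0f} MB/s "
          f"({shared / local_hdd_mb_s:.1f}x a local HDD)")
```

If those numbers are in the right ballpark, even a 4-gig FC link roughly doubles a local spinning disk, but only a QDR-class link lets the NVMe drive run anywhere near its local speed.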

I see things like cheap secondhand FC 4-gig/8-gig HBAs especially, and InfiniBand right up to QDR 40-gig and even FDR 56-gig (which will blow away even 10GbE, even if that comes down in price, although I'm not sure whether I can run any networking connection I want over IB if things expect FC to be used), and I start looking for applications to use them. I'm hoping decade-old enterprise hardware can combine with the right bits and pieces of newer breakthroughs (like NVMe SSDs) to let us stretch well beyond what a normal desktop would do, let alone for a given budget. I'll freely admit sometimes my brainstorming sounds like "creating a tool that I'm trying to find a problem for." I just know that I need to learn about SANs and the rest, because we're going to be application-limited for the most part (no custom or high-end software, desktop grade only) and trying to optimize hardware to work around its limitations (which for something like Adobe CC probably means a local SSD, DAS, or SAN for the scratch drive).
 
Going back to the thread - outside of a data center and an engineered solution - yes, you're being a bit silly... nothing personal, but some of the items in your consideration are well beyond the scope of what SNB is probably about.

servethehome.com has a few forums that do go down this path - and cover exploring eBay for retired data center gear.
 
I think that the future is LAN only: data centers are moving away from SAN to LAN - I predict that SAN will vanish in the near future!

Keep in mind that current SAN is only 16 Gbit fast (most older setups only 8 Gbit) and is a shared medium when it comes to storage connections and SAN switches.
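For context, here's roughly how those per-port SAN numbers stack up against Ethernet (a sketch with nominal throughput figures I'm assuming from spec sheets, not measurements):

```python
# Nominal usable throughput per port, rough spec-sheet assumptions (MB/s).
links_mb_s = {
    "8G FC   (8.5 Gbaud, 8b/10b)":     800,
    "16G FC  (14.025 Gbaud, 64b/66b)": 1600,
    "10GbE   (minus TCP/IP framing)":  1180,
    "40GbE   (minus TCP/IP framing)":  4740,
}
for link, rate in sorted(links_mb_s.items(), key=lambda kv: kv[1]):
    print(f"{link:35s} ~{rate} MB/s")
```

So on those assumptions, 16 Gbit FC still edges out a single 10GbE port, but 40 Gbit Ethernet clears it comfortably.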

The problem is that the LAN standards for >10 Gbit are not yet fully defined/stable, and this slows down adoption - and gives room for InfiniBand. :oops:

But I know that some of my customers already use "prototype" installations with 40 Gbit LAN for storage connectivity. ;)

I really do not think that this kind of setup makes sense for a home user - even SMBs will not invest in this kind of setup - 1 Gbit is the speed cap at the moment for all kinds of home/SMB connections! :rolleyes:

PS: I run CAT6e cabling to all my rooms, currently with 1 Gbit networking - but my WLAN connections are up to 866 Mbit (0.9 Gbit) - so future wireless connectivity (WLAN or 5G at 5/10 Gbit) will even supersede the slow LAN speeds!
I really wonder whether all devices may just have a WLAN or 5G antenna in the near future...!? :eek:
 
I don't see SANs disappearing, but rather evolving into something more cloud-oriented with block/object storage... (see OpenStack and its various storage modules).

40GbE/100GbE is well standardized, and I had multiple 40GbE fiber links between my Denver and Chicago Data Centers, but the cost there is prohibitive for folks in the SNB networking sector...

NBASE-T is likely the next step, as it can be done fairly easily compared to 10GbE, and I think it will be the approachable one; even though 10GBASE-T is coming down in price, the price per port is still fairly high compared to 1GbE.

InfiniBand/Fibre Channel/OmniPath have their places - they are more data center/HPC oriented, but again, as I mentioned earlier, they're a bit out of scope here except for discussion purposes, as their cost and utility are pretty focused on specific applications.
 
It's never silly to want either. I still remember my Sun SPARC workstation that had FC-AL with 15K drives, faster than SATA.

InfiniBand can be done through SFP direct-attach. I noticed that my SFP NIC, when using direct-attach, allows for InfiniBand and all the direct access that comes with it.
 
They all have benefits, but generally it's better to do Ethernet rather than InfiniBand or OmniPath (believe it or not, this is a big area of contention in the high-performance computing, e.g. supercomputing, community)...

FC-AL has its place, but again, it's really outside the scope of SNB - I've had FC-AL in place with my IBM/Sun gear, and it generally works in the data center, but the cost is probably prohibitive, and I would have some concern about lock-in...
 
