I scored a Zyxel switch with 10GbE SFP+ ports at a liquidation - what would be the most straightforward way to connect this switch to a small server running Linux?
I am assuming 10GbE over copper would be the solution of choice, and that an SFP+ module and a 10GbE NIC will get me going, but having never dealt with this equipment, I am scratching my head.
I'm not aware of any SFP+ modules that have a normal Category 6/RJ45 jack as the external interface. That doesn't mean there aren't any, of course. But the prevailing theory seems to be that getting copper out of an SFP+ port means some sort of minimum-length, prefabricated, oddball cable (a direct-attach/DAC cable) intended for switch stacking.
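Whichever cable ends up in the SFP+ cage (DAC or fiber), you can at least sanity-check the negotiated link from the Linux side without any vendor tools. A minimal sketch, assuming the port shows up as a normal network interface - the name eth2 below is just a placeholder, check `ip link` for the real one:

```python
#!/usr/bin/env python3
# Quick link sanity check via sysfs; works for any NIC the kernel has a driver for.
# IFACE is a placeholder -- substitute whatever name your 10GbE port actually gets.
from pathlib import Path

IFACE = "eth2"
base = Path("/sys/class/net") / IFACE

state = (base / "operstate").read_text().strip()
try:
    speed = (base / "speed").read_text().strip()  # Mb/s as reported by the driver
except OSError:
    speed = "unknown (link probably down)"

print(f"{IFACE}: operstate={state}, speed={speed}")
# Expect operstate=up and speed=10000 once the SFP+ link is actually established.
```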
I'll second the Intel X540 series of cards. I'm using a boatload of them with a Dell PowerConnect 8024 switch (not 8024F). I would have gone Cisco, but I couldn't possibly justify the 5x higher price.
However - you may be forced to use fiber to get to your switch, and that means a fiber NIC for your hosts, either a fixed-configuration card or one with its own SFP+ socket. According to the Intel release notes for their X540-series drivers, Intel is one of the vendors that DOES practice "lock-in" by only allowing specific SFP+ brands/models; consult the release notes for the exact list. That can push the price of this approach well outside the range of "reasonable".
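For what it's worth, on Linux the Intel X520/X540 family is handled by the ixgbe driver, which (on the kernels I've seen) exposes an allow_unsupported_sfp module parameter governing that check. A rough sketch for seeing how it's currently set - this assumes ixgbe is the driver in use and that your kernel build exposes the parameter under sysfs:

```python
#!/usr/bin/env python3
# Check whether the ixgbe driver is set to accept third-party SFP+ modules.
# Assumes an Intel NIC on the ixgbe driver; the exact value format ("0"/"1" vs
# "N"/"Y") can vary between kernel versions.
from pathlib import Path

param = Path("/sys/module/ixgbe/parameters/allow_unsupported_sfp")

if not param.exists():
    print("ixgbe not loaded (or this kernel does not expose the parameter)")
else:
    value = param.read_text().strip()
    # A disabled value means unlisted SFP+ modules will be refused by the driver;
    # reloading the module with allow_unsupported_sfp=1 relaxes that check.
    print(f"allow_unsupported_sfp = {value}")
```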
The Netgear XS708E is a "smart" switch (manageable only with an Adobe Air application) with 8 copper ports (one is a combo with an SFP+ slot). It comes in at around $100/port and might be a more cost-effective solution for you than trying to make this work with fiber. Remember, a pair of X540-T1 copper cards is already going to run you $650 or more, while a single X520-SR1 will cost about the same as two X540-T1 cards.
The issue is that most of the "Gigabit switch with one or two 10 Gigabit ports" products are targeted at "top of rack" use: connecting a rack full of Gigabit equipment to a main switch elsewhere in the room via a 10 Gigabit uplink. That out-of-rack link is done with fiber, hence the lack of 10 Gigabit copper ports.
That seems to be the prevailing pattern today - there aren't many fixed-configuration, mostly-copper 10 Gigabit switches. The 8024 I'm using is one (24 copper ports, 4 of which are shared with SFP+ ports); the 8024F (24 SFP+ ports, 4 shared with copper) is much more common.
The situation with modular switches is even worse - you can stuff them with copper modules, but the fact that they're modular means you're paying an arm and a leg (or two) for a feature you don't care about.
*If* I go for this project, I know I am probably not going to come near using its full capacity. I just want to know: is this doable for a guy who is willing to learn, or is it a bottomless pit that I'll regret getting involved with?
You'd be surprised. With no tuning, I get iperf results of 9.89 Gbit/sec (single TCP connection). Actual performance may vary, etc.
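If you want to reproduce that kind of number yourself, iperf between the two hosts is the usual tool. Failing that, here's a crude single-stream throughput check in plain Python - treat it as a sanity check only, since Python overhead alone may keep it well short of line rate on a 10 Gbit link. The port number is arbitrary and the host argument is whatever address the server end has:

```python
#!/usr/bin/env python3
# Crude single-stream TCP throughput test (a stand-in for iperf, not a replacement).
# Run "throughput.py server" on one host and "throughput.py client <server-ip>" on
# the other.
import socket
import sys
import time

PORT = 5201            # arbitrary; pick any free port
CHUNK = 1 << 20        # 1 MiB per send/recv
DURATION = 10          # seconds to transmit

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.monotonic()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:          # client closed the connection
                    break
                total += len(data)
        elapsed = time.monotonic() - start
        print(f"received {total / 1e9:.2f} GB in {elapsed:.1f}s "
              f"= {total * 8 / elapsed / 1e9:.2f} Gbit/s from {addr[0]}")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        end = time.monotonic() + DURATION
        sent = 0
        while time.monotonic() < end:
            sock.sendall(payload)
            sent += len(payload)
    print(f"sent {sent / 1e9:.2f} GB")

if __name__ == "__main__":
    if sys.argv[1:2] == ["server"]:
        server()
    elif len(sys.argv) == 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: throughput.py server | throughput.py client <host>")
```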