
How to: 10GbE switch to server project?

Busto963

Occasional Visitor
Forget practicality.

Forget economics.

This is a case of a hobbyist about to stick his toe into SMB gear.

I scored a Zyxel switch with 10GbE SFP+ ports at a liquidation - what would be the most straightforward way to connect this switch to a small server running Linux?

I am assuming 10GbE over copper would be the solution of choice, and that an SFP module and a 10GbE NIC will get me going, but having never dealt with this equipment, I am scratching my head.

*If* I go for this project, I know I am probably not going to come near to using its full capacity. I just want to know if this is doable for a guy who is willing to learn, or is this a bottomless pit that I will regret getting involved with?
 
You didn't say what OS you are using on the "server". Desktop Windows doesn't make 10GbE very easy. Windows Server and Linux server versions are OK. I don't know about desktop Linux versions.
 
You didn't say what OS you are using on the "server". Desktop Windows doesn't make 10GbE very easy. Windows Server and Linux server versions are OK. I don't know about desktop Linux versions.
Ubuntu Server.

I also note that the cost of the Intel X540, while much, much cheaper than the "grand per port" cost of a 10GbE switch, is still a reason for pause.

With FC HBAs selling on eBay for $100, I hoped for a better price. I would be tempted when the price drops below $200.

Also, while the X540 seems to have much lower power consumption than earlier 10GbE cards, power draw is a much bigger factor than I had thought.
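If I do pull the trigger on a card, my understanding is that checking whether it has come up properly on Ubuntu Server is only a couple of commands - the interface name below is just a guess, since it depends on how the driver enumerates the card:

Code:
# confirm the kernel sees the adapter
lspci | grep -i ethernet
# check negotiated speed and link state (substitute the real interface name)
ethtool eth2 | grep -E "Speed|Link detected"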
 
I scored a Zyxel switch with 10GbE SFP+ ports at a liquidation - what would be the most straightforward way to connect this switch to a small server running Linux?

I am assuming 10GbE over copper would be the solution of choice, and that an SFP module and a 10GbE NIC will get me going, but having never dealt with this equipment, I am scratching my head.
I'm not aware of any SFP+ modules that have a normal Category 6/RJ45 jack as the external interface. That doesn't mean there aren't any, of course. But the prevailing theory seems to be that getting copper out of an SFP+ means some sort of minimum-length, prefabricated, oddball cable for switch stacking.

I'll second the Intel X540 series of cards. I'm using a boatload of them with a Dell PowerConnect 8024 switch (not 8024F). I would have gone Cisco, but I couldn't possibly justify the 5x higher price.

However - you may be forced to use fiber to get to your switch, and that means a fiber NIC for your hosts, either a fixed-config card or one with its own SFP+ socket. According to the Intel release notes for their X540 drivers, they are one of the vendors that DOES practice "lock-in" by only allowing specific SFP+ brands/models. Consult the release notes for the exact list. That can push the price of this project way out of the range of "reasonable".

The Netgear XS708E is a "smart" switch (manageable only with an Adobe Air application) with 8 copper ports (one is a combo with an SFP+). It comes in at around $100/port and might be a more cost-effective solution for you than trying to make this work with fiber. Remember, a pair of X540-T1 copper cards is going to run you $650 or more already, while a single X520-SR1 will cost about the same as two X540-T1 cards.

The issue is that most of the "Gigabit switch with one or two 10 Gigabit ports" products are targeted at "top of rack" use, connecting a rack full of Gigabit equipment to a main switch elsewhere in the room via a 10 Gigabit link. That means the out-of-rack link is done with fiber, hence the lack of 10 Gigabit copper ports.

That seems to be the prevailing situation today - there aren't many fixed-configuration mostly-copper 10 Gigabit switches - the 8024 I'm using is one (24 copper ports, 4 of which are shared with SFP+ ports). The 8024F (24 SFP+ ports, 4 shared with copper) is much more common.

The situation with modular switches is even worse - you can stuff them with copper modules, but the fact that it is modular means you're paying an arm and a leg (or two) for a feature you don't care about.

*If* I go for this project, I know I am probably not going to come near to using its full capacity. I just want to know if this is doable for a guy who is willing to learn, or is this a bottomless pit that I will regret getting involved with?
You'd be surprised. With no tuning, I get iperf results of 9.89Gbit/sec (single TCP connection). Actual performance may vary, etc.
 
I'll second the Intel X540 series of cards. I'm using a boatload of them with a Dell PowerConnect 8024 switch (not 8024F)....
That is helpful to know - thanks. Are you by any chance using these in any workstations?

Not relevant to my server-to-switch project, but I am curious whether these cards really run under Windows or other non-server OSes.

However - you may be forced to use fiber to get to your switch, and that means a fiber NIC for your hosts, either a fixed-config card or one with its own SFP+ socket. According to the Intel release notes for their X540 drivers, they are one of the vendors that DOES practice "lock-in" by only allowing specific SFP+ brands/models. Consult the release notes for the exact list. That can push the price of this project way out of the range of "reasonable".
Well that is a potential gotcha. Again thanks for the warning.

I contacted Zyxel to see if their support had any recommendations.

It seems that I need to look to Intel based on your advice. I never would have figured that they might be the issue.

The Netgear XS708E is a "smart" switch (manageable only with an Adobe Air application) with 8 copper ports (one is a combo with an SFP+). It comes in at around $100/port and might be a more cost-effective solution for you than trying to make this work with fiber.
Well I am in this position because I have the Zyxel switch in hand and got it at a fire sale price. I do not plan to buy another switch.

This is a hobby project, but I cannot afford to go from expensive fun to outrageous.

I am thinking that if I cannot get this working with a single X540, the transceiver modules, and maybe the odd cable, I am going to hook the switch up with aggregated GbE and be happy that I got a very nice switch for little coin and wait for the future when 10GbE becomes more affordable.
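If it comes to that, my understanding is the bonding side on Ubuntu Server is only a few lines. This is just a sketch, assuming the ifenslave package is installed, the GbE ports show up as eth0 and eth1, the address is a placeholder, and the matching Zyxel ports are put in an LACP (802.3ad) group; the exact syntax varies a bit between releases:

Code:
# /etc/network/interfaces - bond two GbE ports with LACP (802.3ad)
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-mode 802.3ad
    bond-miimon 100
    bond-slaves eth0 eth1

The other caveat I keep reading about is that a bond mostly helps when several clients are talking at once - any single TCP stream is still limited to one GbE link's worth of bandwidth.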

You'd be surprised. With no tuning, I get iperf results of 9.89Gbit/sec (single TCP connection). Actual performance may vary, etc.
Well that is motivational!
 
That is helpful to know - thanks. Are you by any chance using these in any workstations?

Not relevant to my server-to-switch project, but I am curious whether these cards really run under Windows or other non-server OSes.
I'm using them on various hardware, all running FreeBSD. I plan on trying one in a Dell Optiplex with Windows 7, but since mine only have one slot better than PCIe x1, I will need to take the video card out (and use the built-in low-end video) to see how it performs.

Well that is a potential gotcha. Again thanks for the warning.

I contacted Zyxel to see if their support had any recommendations.

It seems that I need to look to Intel based on your advice. I never would have figured that they might be the issue.
You only need to worry about the device the SFP (or SFP+) is plugged into. So a board from Vendor A might require one brand of module, while a switch from Vendor B might require a different brand of module. As long as both support the same fiber standard (MM vs. SM LR, SM XR, etc.) the link between them should interoperate. The PITA is that this requires customers to stock more than one brand of the same thing as spare parts. And of course the vendor-locked part almost always costs a lot more than the generic one. Cisco gear is popular enough that most random SFPs I've encountered actually have the magic checksum (that Cisco uses) programmed.

And you have to be very careful to match not only the brand but also the model of SFP+. I bought some Dell-branded SFP+ modules for my 8024, and they didn't work. It turns out they were dual-speed (1Gb or 10Gb), and while the 8024 supports either speed, it doesn't deal with SFP+ modules that answer "Yes" when queried "Are you 1Gb or 10Gb?" [That's a simplification - the actual configuration data lives in a SEEPROM on the SFP.]
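If you end up with an SFP+ card in the Linux box, recent ethtool versions can usually dump the module's EEPROM so you can see exactly which vendor and part number the NIC thinks it is talking to - assuming the driver exposes it, and the interface name here is just an example:

Code:
# show SFP/SFP+ module EEPROM contents (vendor name, part number, compliance codes)
ethtool -m eth2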

Things are actually a lot better these days. I've got equipment that uses XENPAK modules (those things could be used as artillery shells if they were more aerodynamic) and some that use X2 (which is essentially XENPAK but with a totally bizarre insertion/removal mechanism - you have to pull on the cable's connector shell to physically pull it out of the X2, which gets the X2 to release).

I am thinking that if I cannot get this working with a single X540, the transceiver modules, and maybe the odd cable, I am going to hook the switch up with aggregated GbE and be happy that I got a very nice switch for little coin and wait for the future when 10GbE becomes more affordable.
What are you going to talk to at 10 Gigabit? People generally have 2 or more devices connected.
 
Well that is motivational!
Indeed:

Code:
(0:323) client-box:/tmp# iperf -c server-box
------------------------------------------------------------
Client connecting to server-box, TCP port 5001
TCP window size: 32.0 KByte (default)
------------------------------------------------------------
[  3] local 10.20.30.101 port 35071 connected with 10.20.30.104 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  11.5 GBytes  9.89 Gbits/sec
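
For reference, the other end just runs iperf in server mode, listening on the default TCP port 5001:

Code:
server-box# iperf -s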
 
Well, sadly Zyxel does not support 10GbE copper SFP+ transceivers, so this project seems to be stillborn.
 
