
10GbE networking to fast NAS

toysareforboys

New Around Here
If you are OK spending extra $$ on a 10GbE card, you can direct connect two workstations.
I have a monster "NAS" computer (running Windows Server 2012 R2) with 8 x 1TB SSD drives in a stripe (approx. 8TB total storage, 2 GB/sec read/write). I need faster than gigabit now to work easily with 4K media.
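
To put rough numbers on that, here's a quick Python back-of-the-envelope (the 2 GB/sec figure is just the array spec above; real throughput will be a bit lower after protocol overhead):

```python
# Back-of-the-envelope: nominal link speeds vs. the ~2 GB/s striped SSD array.
# Illustrative only; real-world throughput loses a few percent to overhead.

ARRAY_GBPS = 2 * 8  # 2 GB/s expressed in gigabits per second = 16 Gbps

for name, link_gbps in [("1GbE", 1), ("10GbE", 10)]:
    ceiling_mb_s = link_gbps * 1000 / 8  # wire-speed ceiling in MB/s
    print(f"{name}: ~{ceiling_mb_s:.0f} MB/s ceiling; "
          f"the array could saturate {ARRAY_GBPS / link_gbps:.1f} such links")
```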

Could I install three "Dell RK375 Broadcom BCM957710 NetXtreme II 10GbE Network Adapter" cards in this NAS computer and direct connect them with Cat7 to my three main workstations, each with the same card? I can get the cards for $160 each shipped; is that about the best deal for a 10GbE copper card?

Is this the absolute fastest possible way to connect them? Would it be faster than a 10GbE switch when all three machines are accessing the SSD stripe on the NAS computer at the same time? I assumed direct connect would be faster because the three machines won't be sharing the same 10GbE connection to the server; each of them will have its own direct 10GbE pipe, no?

Is there a different distance limitation for direct-connected 10GbE over Cat7, or is it still 100 meters?

If I don't go the direct connect route, is the NETGEAR XS708E a good 10GbE switch?

Thanks for all the great information and testing you provide :)

-Jamie M.
 
You might have fractionally less latency by direct connecting because traffic doesn't have to go through a switch.

All 10GbE switches I have seen support link aggregation, so you can always put three 10GbE cards in the server, connect them to the switch, turn on link aggregation, team the adapters, and have 3x10GbE of bandwidth to the server.

There should be no practical difference between doing that and direct connecting the three workstations to the server, each with its own port.

If you are using Server 2012/2012 R2 or Windows 8/8.1, though, you could connect the three adapters to a 10GbE switch and SMB Multichannel will make the full 3x10GbE (30Gbps) of bandwidth available to the server (ONLY FOR SMB CONNECTIONS!) without any link aggregation or teaming.

Of course, if the workstations are only going to have a single 10GbE adapter in them, you might as well do link aggregation/teaming or direct connect.
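
As a rough sanity check of that "no practical difference" point, here is a toy Python model of steady-state throughput per workstation. The numbers are the ones from this thread (10GbE links, a ~2 GB/s = 16 Gbps array) and it ignores SMB/TCP overhead, caching and burstiness, so treat it as illustration rather than a benchmark:

```python
# Toy model: what each of three workstations gets when all read from the server
# at once. Assumes fair sharing; ignores protocol overhead, caching, burstiness.

def per_client_gbps(server_total_gbps, client_link_gbps, n_clients, array_gbps=16):
    """Each client is limited by its own NIC, its share of the server's total
    network bandwidth, and its share of the SSD array (~2 GB/s = 16 Gbps)."""
    return min(client_link_gbps,
               server_total_gbps / n_clients,
               array_gbps / n_clients)

# Whether the server side is three direct 10GbE links or a 3x10GbE team behind
# a switch, the server total is 30 Gbps and each client link is 10 Gbps:
print(f"~{per_client_gbps(30, 10, 3):.1f} Gbps per workstation")
# -> ~5.3 Gbps each: in this model the 16 Gbps array, not the network, is what
#    gets shared out, so switch vs. direct connect makes no difference.
```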

Still 100 meters.

Just use Cat6a, though. There is no practical difference between Cat6a and Cat7 for 10GbE; both are rated to 100m.

In fact, if you have shorter runs and not an overly harsh EMI environment, just use Cat5e or Cat6: roughly 45m at 10GbE for the former and 55m for the latter.

From all reports, that switch is a good one. I haven't used it myself; I've barely had the opportunity to touch 10GbE. However, if you don't need a switch, save some money and go direct connect if you feel you need 3x10GbE of bandwidth available to the server at once.

Make sure that the server has a VERY beefy processor and plenty of memory to handle this as well. I would look at something like a top-end Ivy Bridge or Haswell i5, or even an i7, if not Ivy Bridge-E (for all of the PCIe lanes).
 
Thanks for the super detailed reply :)

The CPU in the server is a mid-range Xeon, an E5-2650 v2 (8 cores/16 threads, 2.60GHz, 20MB cache, LGA2011). The server only has 32GB of RAM at the moment. Why would just sharing files to three workstations at 30Gbps be CPU or memory intensive at all?

I think link aggregation/teaming is the way to go; if I add another workstation or two in the near future, I won't have to keep adding more cards to the server. It should make cabling much simpler too.

I have a cheap roll of shielded Cat6a, but it's CCA (copper-clad aluminum) wiring inside and I've been having some VERY strange problems with it. I wired the entire media room with active HDMI-over-"Cat5" adapters to connect all our different screens to our different sources. Nothing but problems: HDMI pixelation, dropping sync, etc. I even bought some regular Cat5e that turned out to be CCA and had the same issue! It took me a month to sort it out with the supplier of the HDMI-to-Cat5 adapters, and he sent me some pre-made Cat5e cables that worked perfectly on the first try; that's when I found out all the cables I had been trying were CCA.

So I'm hesitant to wire with this CCA Cat6a :( If I go the direct connect route, two workstations would be 120ft from the server and one is 200ft, so I'd have to use the Cat6a for at least one of them.
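
For what it's worth, converting those runs to meters against the limits mentioned above (55m for Cat6, 100m for Cat6a/Cat7) backs that up; a quick sketch, assuming solid-copper cable rather than CCA:

```python
# Check the two run lengths against the 10GBASE-T limits quoted earlier:
# ~55 m for Cat6, 100 m for Cat6a/Cat7. Assumes solid copper, not CCA.

FT_TO_M = 0.3048

for run_ft in (120, 200):
    run_m = run_ft * FT_TO_M
    verdict = "Cat6 would do" if run_m <= 55 else "needs Cat6a/Cat7"
    print(f"{run_ft} ft = {run_m:.1f} m -> {verdict}")
# -> 120 ft = 36.6 m (Cat6 would do); 200 ft = 61.0 m (over 55 m, needs Cat6a/Cat7).
```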

It hadn't occurred to me to put more than one 10GbE adapter in each workstation. I guess I will test the performance with two cards in one workstation vs. one and see if there is a measurable difference. The workstations have striped SSDs as well, so they may benefit from 20GbE connections.

If I do two cards per workstation, I don't think an 8-port switch is going to cut it. I will have to re-budget and see if 20Gbps at each workstation and maybe 40Gbps to the server will work out.

Thanks again.

-Jamie M.
 
Yeah, I would avoid CCA like the plague. I have one run of CCA, but I needed some direct-bury Cat6 for a short outdoor run from one room to another because it was impossible to run it inside, and CCA was the only way to keep the 40ft I needed under $70.

As for CPU/memory: depending on how you are accessing files, over SMB/CIFS or iSCSI (or something else), there is a fair amount of read/write caching done, or at least allowed by SMB/CIFS. I am not as on top of iSCSI. Your SSD array should be screaming fast, but the RAM is going to be faster still.
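
To put some very rough numbers on the caching point (pure arithmetic from the figures in this thread: 32GB of RAM, a ~2 GB/s array, 3x10GbE of network; illustrative only):

```python
# Rough sense of how far 32 GB of RAM goes as a read/write cache at these rates.
# Illustrative arithmetic only; the OS and services use some of that RAM, and
# real SMB caching behaviour is far more nuanced.

ram_gb = 32
array_gb_per_s = 2.0       # stated striped-SSD throughput
network_gb_per_s = 30 / 8  # 3x10GbE fully saturated, about 3.75 GB/s

print(f"32 GB covers roughly {ram_gb / array_gb_per_s:.0f} s of I/O at array speed "
      f"and {ram_gb / network_gb_per_s:.0f} s at full 3x10GbE")
# -> ~16 s and ~9 s: enough to smooth bursts, but sustained transfers hit the SSDs.
```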

As for CPU... that is a LOT of data you are pushing, and even the best adapters generally only have fairly basic offloading. The SSDs are also going to soak up processor cycles.

I wouldn't be surprised if you loaded up a few threads to 20-50% while saturating all of those adapters and the SSD array at the same time (supposing you were).

Something like a Celeron would be crushed under the I/O the processor has to handle (it isn't direct from adapter to storage; the CPU has to be involved). An i3 would probably be crushed too. Heck, my i5-3570 might not be breathing easy either. I am talking about three concurrent 10GbE connections under load plus the big old SSD array.

What you've got should handle it just fine though.
 
