Adding some point-to-point 10GbE to an existing switched 1GbE network


mervincm

Occasional Visitor
Home Lab
I currently have a bunch of systems connected to a single physical 16-port GbE switch. Two of these systems are ESXi hosts. Both ESXi hosts have VMs loaded, and each has a single vSwitch with that system's physical NIC added. In addition I have one Windows Server 2012 R2 server (multi-role storage) with 6x3TB HDDs, and one admin station running Windows 7 with 3x2TB HDDs.

It's all one flat network: one subnet, no VLANs, no port trunking, no jumbo frames, nothing special at all yet. Performance-wise and functionally everything is fine, with one exception: I want more than 1GbE of capacity when copying files between the storage server, the VMs, and the admin station.

Port trunks etc. would add value if I were talking about multiple simultaneous connections, but I want to improve performance of a single connection.

Looking at the prices of 10GbE switches, I'm hoping to avoid needing one.

Coming across some inexpensive used single-port 10GbE Broadcom NICs (vSphere/Windows support) with optical transceivers, I picked up 6 of them.

I was hoping to add 2 of these NICs to each ESXi host and add them to the existing vSwitch in each. This way each ESXi host would have a vSwitch with a few VMs, 1 physical 1GbE port connected to the main switch, 1 physical 10GbE port going to the OTHER ESXi host, and 1 physical 10GbE port to connect to the storage or admin stations.

Is this going to work, or am I out to lunch? I want to avoid VLANs / routing within my LAN if at all possible, at least at first.

I see some potential looping issues, but I'm not sure if this is possible.
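
For what it's worth, the mechanical part of attaching the extra uplinks to an existing standard vSwitch looks simple enough from the ESXi shell with esxcli. A minimal sketch, assuming the new 10GbE card shows up as vmnic2 and the existing vSwitch is vSwitch0 (both placeholder names):

# list physical NICs to find the name ESXi assigned to the new 10GbE card
esxcli network nic list

# attach it as an additional uplink on the existing standard vSwitch
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0

# confirm the vSwitch now lists both uplinks
esxcli network vswitch standard list --vswitch-name=vSwitch0

Whether the vSwitch will actually forward traffic between two physical uplinks is the part I'm unsure about.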
 
Current 1Gig layout


Proposed added 10GBE piece
 

Attachments

  • lab-today.jpg
  • lab-new.jpg
I've never tried it that way, and I can't imagine it will work as easily as it looks. I don't know if the ESXi vSwitch will handle the traffic between, say, the admin PC and the internet.
 
I know little about ESXi. That said, at least in Windows, you can bridge connections as an option, and I'm sure you could in ESXi too. So yes, it should work fine with a little tinkering. Not sure how the vSwitch will handle it, but Windows by default has no issues with multiple non-trunked NICs connected to the same LAN. You are just limited to 1GbE of bandwidth between the various NICs, with the exception of SMB (version 3.0 or later), which can handle more than 1GbE of aggregate bandwidth across the various cards with one or more sessions.

I've played with it a bunch: I can get 2GbE of aggregate bandwidth between my desktop and server with 2 single-port NICs connected to my switch, or, if I unplug one of my desktop's ports at the switch, I can pull 1GbE of bandwidth to my desktop and 1GbE of bandwidth to my laptop from the server at the same time. There are limitations compared to port trunking (i.e., non-SMB traffic is limited to 1GbE), but it works well.

Obviously you have the cards now and managed to get them cheap, so 10GbE full speed ahead! (Jealous much, BTW.) That said, since you are on 2012 R2, you could be utilizing SMB Multichannel and multiple GbE NICs... supposing you are using CIFS/SMB and not something else, at least. If you don't mind older cards, there are tons of cheap quad-port GbE NICs floating around, and that would get you 4 or 8 Gbps of connectivity between machines. You could do the same P2P links like you propose with the 10GbE (and a lot more cables), or, if you want all of the bandwidth to not be limited by shared connections, get a bigger switch and connect it all to the core.
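
If you do go the multiple-GbE-NIC route, it's worth confirming SMB Multichannel is actually kicking in. A quick check from PowerShell (Windows 8 / Server 2012 or later), run while a large copy is in progress:

# interfaces SMB considers usable for Multichannel, with speed and RSS/RDMA capability
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface

# during an active transfer, this should show connections spread across the NICs in use
Get-SmbMultichannelConnection

If only one interface ever shows up in the connection list, traffic is riding a single NIC and you're back to roughly 1GbE.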

I have a reduced version of that, with my server and my desktop each connected to my core switch with a pair of single-port Intel NICs. I easily get 220+ MB/sec between the machines and my RAID0 arrays (it was more like 235 MB/sec before, but my arrays are getting full and I need new disks).

I am currently looking at a couple of dual-port NICs to replace my single-port cards with something newer (I am using Gigabit CT cards, which were EOL'd by Intel a while ago but still work great). That said, there are plenty of older dual- and quad-port cards out there a lot cheaper than what I am looking at... but I want all of the fancy-pants new technologies, like RDMA offload and EEE.
 

The part I was talking about (in my earlier reply) is traffic going in one physical port on the vSwitch and out another physical port on the same vSwitch. I've never tested it, so I can't be sure either way... maybe I'll do that this week and see what it'll do.
 
Yes, exactly. The problem I have is convincing VMware to bridge physical adapters as part of a vSwitch.
 

I can't imagine it will want to work that way. Reason being, it could cause a switching loop, and generally those are bad. ;)

What's your aversion to routing? :cool:
 

1) Desire to keep things simple. Routing means new IPs.
2) Concern about performance impacts. Adding a virtual router (2, I suppose) could work, but wouldn't it be a bottleneck at 10G? (See the iperf3 sketch below.)
3) Not really sure how to do it, honestly. Can I somehow configure ESXi to route between 2 VMs? I have some experience with pfSense, and it can route, but it really seems like overkill for what I need. Maybe there is a virtual appliance router (hmm, maybe a switch?).
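
On the bottleneck worry in point 2, the practical answer is to measure it: run iperf3 between a machine on one side of the virtual router/bridge and a machine on the other, and see what it sustains. A minimal sketch (the target address is just a placeholder):

# on the receiving machine
iperf3 -s

# on the sending machine: 4 parallel streams for 30 seconds
iperf3 -c 192.168.110.20 -P 4 -t 30

Whatever the appliance sustains here is the ceiling your file copies will see through it.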
 
In case anyone is searching for how to do something similar, I was able to verify that:
-The built-in VMware vSwitch cannot bridge 2 physical adapters
-The optional Cisco Nexus 1000V vSwitch cannot either (had my Cisco rep check)
-It was possible to do it very simply and reliably using a tiny VyOS VM (a Vyatta fork) with a few commands, as long as you remembered to allow promiscuous mode on the VMware vSwitches (see the esxcli sketch after the config below)

Everything I needed to do in VyOS to configure it as an Ethernet bridge is right here:

# log in with the default credentials (vyos / vyos) and enter configuration mode
configure

# put both Ethernet interfaces into bridge group br0
edit interfaces ethernet eth0
set bridge-group bridge br0
exit
edit interfaces ethernet eth1
set bridge-group bridge br0
exit

# basic system settings
set system host-name vyossw1
set system domain-name xx.xxx.xxx
set system name-server 192.168.110.1
set system gateway-address 192.168.110.1

# create the bridge itself and give it a management address
set interfaces bridge br0
set interfaces bridge br0 address '192.168.110.19/24'

# allow SSH for remote management
set service ssh port 22

commit
save
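
For the promiscuous-mode piece mentioned above, the security policy can be changed on the relevant standard vSwitch from the ESXi shell. A minimal sketch, assuming the bridge VM's port groups hang off a vSwitch named vSwitch1 (a placeholder name); allowing forged transmits as well is generally needed for bridged traffic, since the VM forwards frames carrying other machines' source MACs:

# let the VyOS VM receive frames addressed to other MACs and forward frames with other source MACs
esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true --allow-forged-transmits=true

# verify the policy took effect
esxcli network vswitch standard policy security get --vswitch-name=vSwitch1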


In the end I bought a physical switch with 2 empty SFP+ cages, and 2 10GigE SFP+ LC transceivers, inexpensive enough to dump the point-to-point idea entirely!
 
