How To Set Up Server NIC Teaming


Dennis Wood

Senior Member
Doug, teaming and variations on the same are of course supported under XP and Vista, but they depend on the NIC driver's support for it. I've yet to see support for two unrelated NICs, only for cards or motherboards with two or more ports. Asus has quite a few boards with two, and some with four, gigabit ports that support teaming. Getting a switch to work with both is a bit more complicated.

See my post here: http://forums.smallnetbuilder.com/showthread.php?t=427
 
NIC Teaming in XP and Vista

Thank you Dennis for your reply.

I agree, there do appear to be NIC teaming solutions for XP and Vista with specific vendor cards and software, such as Marvell's. Thank you for the detail and for providing the option on this forum!

Doug Reid
 
Linux bonding options

Hello Doug, all,

I feel I should add that the Linux bonding module has several possible modes of operation; some may work and some may not, depending on your hardware, so it is good to know there is room for tweaking.
On a Debian Linux 2.6.22 box I have the following /etc/modprobe.d/aliases-bond file, with the alternatives I tried commented out:
# Arp-based link failure detection, active backup. Works, not using it.
#options bond0 mode=active-backup arp_interval=2000 arp_ip_target=172.16.0.3
# Balancing mode based on the Powerconnect's 802.3ad capability - 802.3ad requires mii.
# The Powerconnect 2716 is apparently *not* 802.3ad capable?
#options bond0 mode=802.3ad miimon=1000 downdelay=5000 updelay=0
# Round-robin works ok. Long downdelay needed to quiet mii false alarms
options bond0 mode=balance-rr miimon=1000 downdelay=5000 updelay=0
I should insist this is absolutely hardware- and kernel-version-dependent, and I don't claim to be an expert, so take this example with a grain of salt.
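
A quick sanity check after loading the module, whatever mode you end up with, is the bonding status file, which reports the active mode and the state of each slave:

cat /proc/net/bonding/bond0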

One thing I am interested in, tried and miserably failed at a few months ago, and have put on hold, is bridging a bonded interface. That's a useful configuration on dual-NIC servers running bridged VMs, or for OpenVPN, for example. If you'd like to investigate, I'm all ears :)
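
For reference, one plausible shape for the bridged-bond config would be the following /etc/network/interfaces stanza. It is only a sketch (I never got mine working); it assumes bond0 is already created and enslaved via the module options above, that bridge-utils is installed, and that the address and netmask are placeholders:

# Bridge br0 sits on top of the existing bond0 (bridge-utils syntax)
auto br0
iface br0 inet static
    address 172.16.0.10
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0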
 
Dennis,
Windows-based teaming with a switch is certainly possible. I have a rack full of Windows servers that use multiple NICs, teamed using only the NIC driver software. Most are HP NICs, but I've also done it with Intel NICs. I then run both into a switch without any teaming or aggregation configured on the switch, and it works fine.
The server sees the team as a single IP address, so you get bonding when both links work and fault tolerance when one fails.

Peter
 
The fact that Linux supports this at all is very cool, particularly with disparate hardware. Teaming on a server, particularly with a fast RAID array, is one cheap way to dramatically improve client performance.

I guess after all these years messing with Microsoft's server and workstation variants, I'm going to have to get my Linux jockey jersey in order :)
 
Bonding and Jumbo Frames

Hi,

Using the guidance from this great site, I have successfully got jumbo frames to work (still to test the speed implications, though). I have also managed to get bonding to work on a mobo with dual PCIe NICs.

Any ideas on getting both to work at the same time on startup (i.e. additional syntax in the /etc/network/interfaces file)?

Thanks

edit: my attempt at this just gave my new baby a near-death experience.
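
For anyone attempting the same combination, one plausible /etc/network/interfaces stanza would be roughly the following. It's only a sketch: it assumes bond0 is created and enslaved by the bonding module options at boot, and the address and the eth0/eth1 names are placeholders. Getting the MTU to propagate to the slaves is the fiddly part, so expect to experiment:

auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    mtu 9000
    # some kernels propagate the bond MTU to the slaves; if not, force it
    post-up ifconfig eth0 mtu 9000 || true
    post-up ifconfig eth1 mtu 9000 || true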
 
Peter, what I've seen on the switch end (where it doesn't support LACP) is that teaming works, but only on the sending side of the data. In other words, if two single-port-NIC workstations request data at the same time, they each get a "pipe" from a teamed server. However, if a teamed-NIC workstation requests data from a teamed server (and, again, a "dumb" switch), then only one port is used, even though two ports are available. Once you use an 802.3ad switch with LACP trunks, the switch can dynamically alter the traffic on the ports. This means that a teamed workstation, properly configured against an LACP trunk on the switch, can increase its bandwidth from the server by using both ports dynamically during a request.

So with two workstations hammering a server, you could, for example, use a 4-port teamed NIC on the server and dual-LAN NICs on the workstations. With an 802.3ad switch and three trunks configured, each workstation could potentially have its own two ports, increasing bandwidth to 2000Mbps each on downstream requests. With a non-802.3ad switch, each workstation would only get 1000Mbps regardless of cabling or driver configuration.

I make no claim to expertise in the area, but my observations are based on a review of the 802.3ad spec and our own extensive testing on three switches, two dual-LAN workstations, and a load-balanced (2 LAN ports) TS509 NAS. We've tested a "dumb" switch, a managed switch with manual trunking, and now a 3Com switch with full 802.3ad support. At this point, I'd only ever recommend a switch with full 802.3ad where the goal is a cheap but significant increase in both server and workstation performance.

One of the important differences we've observed is that not all dual-NIC teaming is handled the same way by the driver. For example, Nvidia's 680i chipset teaming is configured strictly on the driver side with no options...just on or off. Configure link aggregation at the switch, either manual or automatic (LACP), and you lose LAN connectivity. The Marvell Yukon dual-LAN driver, however, allows three modes to be set at the driver level: "Basic" for use with dumb switches, "Static" for use with manually configured trunking at the switch, and "Dynamic" for use with fully 802.3ad-compliant switches. It is when using dynamic mode on an 802.3ad-compliant switch that we've seen significant performance increases to the workstation.
 
So with two workstations hammering a server, you could, for example, use a 4-port teamed NIC on the server and dual-LAN NICs on the workstations. With an 802.3ad switch and three trunks configured, each workstation could potentially have its own two ports, increasing bandwidth to 2000Mbps each on downstream requests.

Are you claiming that a workstation can get more than 1 Gb/s for a single file transfer? That would contradict my understanding of the standard, and I'd love to see it substantiated in detail.
 
Netgear GS108T

Has anyone looked at the Netgear GS108T (not the GS108)? It seems to have all the features at a sub-$100 price point, and it also supports jumbo frames, which was on my checklist. 8 ports.

http://www.netgear.com/Products/Switches/AdvancedSmartSwitches/GS108T.aspx?detail=Specifications

  • IEEE802.3ad Link Aggregation (manual or LACP)

So, it looks like a Netgear GS108T teamed with a Marvell Yukon board set to dynamic should do the trick for full 802.3ad gigabit link aggregation under Windows. Thoughts?

-db
 
Yes, we have exactly that combination working. The benefit to the workstation, though, is minimal...it's the NAS that will benefit from link aggregation when it's being accessed by several workstations at the same time. The GS108T is a bit weird in terms of its interface, but 802.3ad is fully supported.

Madwand, I'm beginning to think that some of the results we tested with the TS509 and LACP were in fact skewed by older driver/firmware on the TS509, since replaced. Looking at the different switches we've tested, and examining the port stats on LACP-enabled connections, we've seen different behavior on the ports with regard to RX and TX traffic. On older QNAP TS509 firmware (likely the 1st or 2nd release), the NAS would set up one cable as RX and the other as TX, and we did see increased performance connected this way. Since then, we've only seen LACP increase the read/write capacity of the NAS during multiple loads, provided its disk I/O could keep up. So far, it's only on reads on the TS509 and the 6-drive ReadyNAS Pro that LACP has increased aggregate throughput beyond 120MB/s.

Cheers,
Dennis.
 
Yes, we have exactly that combination working. The benefit to the workstation, though, is minimal.

Excellent news - thanks. I am a bit confused, though, about the minimal workstation benefit. For example, local SATA II drives connected to a workstation have an interface bandwidth of 3Gbit/sec but will be performance-limited by the read speed of the drive unless data is coming from the drive buffer. So, realistically, with today's 7200RPM SATA II drives you are looking at about 110MB/sec for data located on the drive's outer zone. In this scenario, local storage could be faster than a 1Gbit/sec ReadyNAS Pro connection.

My expectation was that, given all of the performance data on the ReadyNAS Pro, if I had a bonded connection from the workstation to the ReadyNAS providing 2Gbit/sec of bandwidth (of which I am conservatively estimating 150MB/sec of realizable throughput), I would see a minimum lift of about 36% vs local storage (150/110 ≈ 1.36), and more if the data on the local drive is located on the drive's inner zone. Is there something I am failing to consider?

Thanks,

-db

p.s. - The 88E8053 chipset on the P5W supports jumbo frames, right?
 
Yes, the P5W supports jumbo frames, but we don't use them. In a mixed network environment with XP Pro, several NAS units, and Vista 32/64-bit, strange things happen with a few shares when we enable jumbo frames, and that's with all NICs matched. I gave up on jumbo frames in that environment. In a smaller network, I'd give them a go, though. What we've found is that the SMB2 performance of Vista SP1 (later investigated by Tim here) is the single largest factor in performance, assuming your hard drive speeds are up over 120MB/s sustained and you're using a PCIe gigabit interface.

In just about every case, you'll find a 7200RPM eSATA drive actually returns a real-world 60MB/s sustained, with burst rates much higher. Tim has tested the living daylights out of several single drives here, and you'll find plenty of posts on the subject. We don't care about burst rates, though, as our typical HD video files are 1 to 5GB in size and our typical RAW image files are over 21MB each. That's why all of our workstations used for testing (and for editing normally) use a 3- or 4-drive RAID 0 array, using the good old onboard Intel or Nvidia controller, which works great for RAID 0. We're seriously pushing the current editing boxes, so I've decided to configure a new quad-core processing box with a 64-bit OS and 16GB of RAM, and we'll be testing this new workstation against the NAS units.

The other issue is that even if you team the workstation, the NAS and switch are not typically saturating the two-port LAN pipe. The way to tell is to configure teaming, reset the port stats on your switch, then transfer a file from the NAS to the workstation. Check the TX and RX packet stats on the teamed ports right after (or during) the transfer, and you'll likely find they are not balanced for a single-workstation transfer. Now hang two workstations off the NAS, both loading it, and teaming makes a very large difference in performance, provided the NAS's internal disk I/O can keep up. In other words, if the NAS disk I/O can only handle 50MB/s write speeds, then two workstations loading it will typically drop to 25MB/s each (assuming all is well with firmware!). That's well below the 110MB/s effective LAN speed of one gigabit port, so having two ports available will do nothing to improve performance. The TS509 in teamed mode seems happy providing read speeds of 66MB/s to two hungry workstations simultaneously, so for reads, the teamed NAS connection does increase performance to each workstation.
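
(A side note on the port-stats check above, for the Linux-bonded boxes discussed earlier in the thread: you can make the same check without a managed switch by watching the per-slave transmit counters during a transfer. The eth0/eth1 names are placeholders for the bonded NICs.)

watch -n 1 'cat /sys/class/net/eth0/statistics/tx_bytes /sys/class/net/eth1/statistics/tx_bytes'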

The point of the long-winded reply is that we've found workstations don't benefit from teaming unless they're serving files. Also, if you've got a single drive in a workstation, don't be surprised if your sustained gigabit transfer rates max out at 60 to 70MB/s regardless of teaming, jumbo frames, or otherwise. Drives are cheap and onboard RAID 0 works great. This however does not apply to onboard RAID 5 performance ... in our tests it's typically terrible.
 
Not sure if I should create a new thread but since it's about NIC teaming I'll ask here first.

I have a Dell Powerconnect 2724 24-port switch and I have configured two ports to LAG group 1 for my server.

My server has 2x Intel gigabit ports and I've configured them to be a team (IEEE 802.3ad Dynamic Link Aggregation).

To test, I do a ping -t to my server and unplug one of the network cables from the server. The ping fails once and switches over to the other Intel gigabit port and continues pinging with no issues.

However, once I plug the network cable back in, I can't ping or remote back into my server. Is this normal? I would think plugging the network cable back in would simply reconnect the adapter and rejoin the team.
If I unplug either network cable on the server (so only one cable is plugged in), I regain the connection.
 
