Odd Ethernet issue with auto-negotiating the link speed

B_Sisko

New Around Here
In my system I have a motherboard with two gigabit network adapters, an Intel 82579V and a Realtek 8111E, but only one of them seems to correctly negotiate the link speed. The Intel NIC has absolutely no difficulty setting the link speed to 1Gbps, but the Realtek NIC spends about 10 seconds trying to negotiate the speed and eventually settles on 100Mbps.

I am not sure whether there is a problem with the NIC or the cabling here. The cable diagnostics on the Intel NIC show everything is fine. Just curious whether this is something to worry about.
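
One quick way to see what each adapter actually negotiated is something like this (a minimal Python sketch using the psutil package; the adapter names it prints will differ per system):

    # Print the negotiated link speed and duplex for every adapter.
    # Requires psutil (pip install psutil); works on Windows and Linux.
    import psutil

    for name, stats in psutil.net_if_stats().items():
        # speed is the negotiated link speed in Mbps (0 if unknown)
        print(f"{name}: up={stats.isup} speed={stats.speed} Mbps duplex={stats.duplex}")

If the Realtek port keeps reporting 100Mbps on a cable run the Intel port negotiates at 1000Mbps, the port or the cable termination would be the next thing to suspect.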
 
Just use the Intel NIC. You don't want to connect 2 NICs from the same machine, as it can create a storm in your switch. There are reasons to connect multiple NICs from the same machine, but not just because they're there.
 
I am not using them both at the same time; I guess I didn't make that clear. I am trying to figure out whether the issue is with the NIC or the cable run. It seems really strange for one NIC to work fine while another can barely negotiate a link speed.
 
Just use the Intel NIC. You don't want to connect 2 NICs from the same machine, as it can create a storm in your switch. There are reasons to connect multiple NICs from the same machine, but not just because they're there.

Windows 8/8.1 and Server 2012 don't have this issue. They handle multiple NICs connected to the same switch, without teaming, just fine.
 
So how does Windows handle multiple NICs if the switch has no support for it? Does it put one of the NICs in blocking mode, like Spanning Tree would?
 
Yes and no. Windows now multiplexes network transfers over multiple adapters, so it knows which ones to use for what traffic and when. Multiple IP addresses and multiple MACs. If you are running

192.168.1.2 w/ 00:00:00:01
and
192.168.1.3 w/ 00:00:00:02

Windows will, for example, send some packets to a server elsewhere on the network from NIC1 (.2/:01). The switch(es) carry those along to the server, the server knows from ARP that responses back to the requesting computer should go to .2/:01, and the switches forward accordingly.

All sessions with this server, the requesting computer will send over NIC1 with the .2/:01 IP/MAC.

In reverse, if another computer were trying to request info from the computer with two NICs, it would pick either IP/MAC from its ARP table, because it knows either one will get there. The TCP/IP session gets established to one IP/MAC on the end computer and stays with that one instead of bouncing to the other NIC/IP/MAC.
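
To make the "one session sticks to one NIC" part concrete, here is a minimal sketch (Python; the local address reuses the example above, and the server address is made up): binding the socket to one local IP pins the whole TCP session to that NIC's IP/MAC.

    # Pin an outgoing TCP session to a specific NIC by binding its local IP
    # before connecting. Every packet in this session then carries that
    # source IP/MAC, so the switches and the far end only ever see one path.
    import socket

    LOCAL_IP = "192.168.1.2"        # NIC1 from the example above
    SERVER = ("192.168.1.50", 445)  # some server on the LAN (made-up address)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((LOCAL_IP, 0))        # port 0 = let the OS pick the ephemeral port
    sock.connect(SERVER)
    print("session bound to", sock.getsockname())
    sock.close()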

Windows will load balance (to some degree) between all of the NICs with the latest network stack (SMB Multichannel). I can be running a large file download from the internet to my server (which has 2 NICs and Windows 8.1/SMB Multichannel), backing up pictures from my laptop to the server, and streaming a couple of movies to my tablet and my Apple TV, and I can see that the traffic is shared out between both NICs. Hit a source/target that can handle the bandwidth and both NICs get loaded up to that single destination: my desktop has 2 NICs and Windows 8.1 just like my server, and I'll get 235MB/sec between the two on a file transfer, if my storage can handle it. That's something LAG/teaming CANNOT do.
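
That ~235MB/sec figure is about what the raw math says two gigabit links can do together (a rough back-of-envelope sketch; the ~6% figure for Ethernet/IP/TCP framing overhead is an approximation):

    # Rough best case for two 1 Gbps NICs used together.
    links = 2
    line_rate_mbps = 1000                        # per link, in megabits/sec
    raw_mb_per_sec = links * line_rate_mbps / 8  # 250 MB/s before overhead
    overhead = 0.06                              # approx. Ethernet/IP/TCP framing overhead
    print(f"~{raw_mb_per_sec * (1 - overhead):.0f} MB/s usable")  # ~235 MB/s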
 
I assume that since 2 IP addresses are assigned to the 2 NICs, no switch support is required. Very interesting. When you do an NSLOOKUP, which IP address do you receive for the server? Is it random? First to respond?

At home it is hard to have enough hardware to saturate a gig connection. It can be done, and I have done it moving hundreds of gigs, but it is not a normal thing on my network.
 
I'll check on NSLOOKUP; I've never bothered to.
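
For what it's worth, something like this lists every address a name resolves to, which is basically what NSLOOKUP shows (a Python sketch; "myserver" is just a placeholder for the server's hostname):

    # List every IPv4 address returned for the server's name. Most resolvers
    # rotate the order between queries (round-robin), so which address a
    # client tries first can vary.
    import socket

    HOST = "myserver"  # placeholder hostname
    addrs = sorted({info[4][0] for info in socket.getaddrinfo(HOST, None, socket.AF_INET)})
    print(addrs)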

I am currently disk limited. I used to have a 2x1TB RAID0 array in my desktop and a 2x2TB RAID0 array in my server, but one of the 2TB drives started piling up pending sector errors, so I ditched both arrays and settled on a 3TB Seagate 7200rpm drive in each machine. I am planning to buy another pair in a few weeks so I can move to 2x3TB RAID0 arrays in both machines, for both performance and storage reasons.

With a single disk, but a reasonable amount of RAM caching (a recent download, write caching on the receiving machine, etc.), I can sometimes hit the 210-230MB/sec range on transfers of videos and the like (anything of "decent" size; anything under about 200MB transfers so quickly I don't even see the transfer window pop up).

The worst I see is when I am purely disk bound on one side or the other; then I only get around 155-160MB/sec (I only have around 800GB free on the disks, so I am down to roughly the inner third of the disk for writes now).

Once I have a RAID0 array built again I should be purely network layer limited (other than very small files).

Transfers over about 4-5GB to the server run into RAM caching limits and drop down to disk I/O speeds, and transfers from the server to my desktop do the same at roughly the 10-11GB mark (yay for having 8GB of RAM on my server and 16GB on my desktop).
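
Those cutover points line up roughly with how much free RAM each box can throw at the file cache. A rough sketch of the arithmetic (the 5GB usable-cache figure is just an assumption based on the RAM numbers above):

    # Roughly how long the file cache can absorb a transfer at network speed
    # before the disk becomes the bottleneck: the cache fills at the
    # difference between the incoming rate and the disk's drain rate.
    net_mb_per_sec = 235     # two-NIC transfer rate from above
    disk_mb_per_sec = 160    # worst-case single-disk rate from above
    cache_gb = 5             # assumed free RAM usable as write cache

    seconds = cache_gb * 1024 / (net_mb_per_sec - disk_mb_per_sec)
    print(f"~{seconds:.0f} s at network speed before dropping to disk speed")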
 
