Switch latency experiment (100 Mbit) using ICMP ping

Jeroen1000

Regular Contributor
I just thought I'd share what I have found out which is probably well known but may help less seasoned people.

I've tested a:

- Mikrotik 250GS 5-port gigabit switch (managed)
- Cisco SLM-2008 8-port gigabit switch (managed)
- Netgear Prosafe GS105 5-port gigabit switch
- SMC FS8 8-port 100 Mbit switch
- D-Link DES-1005D 5-port 100 Mbit switch

Test equipment description:

- Intel gigabit NIC on the laptop and Realtek gigabit NIC on the desktop, both forced to 100 Mbit full duplex. Honestly, the brand of NIC hardly matters unless the drivers are totally broken or the NIC itself is faulty.
- Laptop running Windows 7, desktop running Windows XP.
- All firewalls, antivirus and security features OFF.
- Tested under low system load (so besides pinging, the systems were idle).
- 2 short 60 cm CAT5E cables.
- Ping tool: Fping 3.00 for Windows, see http://www.kwakkelflap.com/fping.html

Oddities:

- The first ping was always considerably higher (> 0.5 ms) than the average ping, and also higher than the next-highest ping. It's a fluke caused by the Realtek NIC and I chose to ignore it:). If you are wondering why I ignored it: 0.5 ms is a lot considering my test scenario (see below).

- Windows ping.exe (which I have not used!) truncates rather than rounds: it still reports 1.9 ms as 1 ms, and only starts indicating 2 ms when Fping says 2.1 ms. How weird is that. :D

- Sometimes, probably caused by the systems, pinging with fewer bytes yielded a higher ping. When that happened, I redid the test twice just to make sure, as otherwise it wouldn't make sense. Another "fluke".

Test Scenario:

  • Number of pings: 2 times 20. Then I average the minimum, maximum and average ping values. I pinged with

    - 32 bytes
    - 512 bytes
    - 1024 bytes
    - 1280 bytes
    - 1472 bytes
    - 1500 bytes
  • I did a baseline run to get the absolutely lowest ping by directly connecting the laptop to the desktop (so without a switch in between)
  • For the ping tests with switches, both the laptop and desktop are connected to the switch (obviously) and well, that's the point:).
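As an aside on the payload sizes above: a 1472-byte ICMP payload is the largest that still fits a 1500-byte IP packet (the usual Ethernet MTU), which is why both 1472 and 1500 appear in the list. A quick sketch of that arithmetic in Python (header sizes assume IPv4 without options and untagged Ethernet):

```python
ICMP_HEADER = 8        # bytes, ICMP echo header
IPV4_HEADER = 20       # bytes, IPv4 header without options
ETH_OVERHEAD = 14 + 4  # bytes, Ethernet header + frame check sequence

def ip_packet_bytes(payload: int) -> int:
    """Size of the IP packet carrying an ICMP echo with this payload."""
    return payload + ICMP_HEADER + IPV4_HEADER

def wire_bytes(payload: int) -> int:
    """Bytes serialised on the wire for one (unfragmented) echo request."""
    return ip_packet_bytes(payload) + ETH_OVERHEAD

print(ip_packet_bytes(1472))  # 1500: exactly fills the MTU
print(wire_bytes(1472))       # 1518-byte Ethernet frame on the wire
print(ip_packet_bytes(1500))  # 1528: exceeds the MTU, so it is fragmented
```

So the 1500-byte pings actually go out as two fragments, which is worth keeping in mind when comparing those numbers to the 1472-byte run.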

Results will follow; I do not have them in front of me right now.
 
Okay, I've uploaded the results in an Excel file as I could not get any layout going here.
As the byte size goes up, you can observe a 0.2 to 0.3 ms extra delay when communication goes via a switch, as opposed to the 2 computers being directly connected. Calculating serialisation delay is easy:

Network speed in bits per second => 100 Mbit/s = 100 000 000 bits per second
Convert the frame size in bytes to bits by multiplying by 8. So, 1500 bytes = 12 000 bits

Serialisation delay is then 12 000 / 100 000 000 = 0.00012 seconds, which equals 0.12 milliseconds. You need to multiply this by two, as the host you ping responds, so it is the round-trip delay that is measured, not the one-way delay.
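The same calculation, wrapped in a couple of lines of Python (just the arithmetic above, nothing more):

```python
def serialisation_delay_ms(frame_bytes: int, link_bps: float) -> float:
    """One-way serialisation delay in milliseconds."""
    return frame_bytes * 8 / link_bps * 1000

one_way = serialisation_delay_ms(1500, 100_000_000)  # 100 Mbit/s link
print(f"one way:    {one_way:.2f} ms")      # 0.12 ms
print(f"round trip: {one_way * 2:.2f} ms")  # 0.24 ms
```

On gigabit the same frame serialises in 0.012 ms, which is why the effect is much harder to see there.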

I'll update the file if someone wants me to test something specific. I'm testing my routers at the moment and I'm also thinking of doing a TCP-ping version.

Latest version can be found here
 
- The first ping was always considerably higher (> 0.5 ms) than the average ping and also higher than the next maximum ping in line. It's a fluke caused by the Realtek NIC and I chose to ignore it:). If you are asking why I have ignored it; 0.5 ms is a lot considering my test scenario (see below).

The first delay is caused by ARP (it queries the network to resolve the IP to a MAC address). You should find that, when running another test back to back, the delay disappears. The command to display the ARP table is arp -a.
 
I disagree (kindly, I'm not trying to make a fuss). But maybe we are talking past each other, so I'll prove my point with some simple math (yuk:p). Just calculate the serialisation delay (and this is best case) for 1500 bytes on a 100 Mbit link: I get 0.12 ms for that. That's the bare minimum one way, so for pinging between 2 computers it amounts to 0.24 ms, as the reply is subject to the same delay.

I did not have time to continue and share my results yesterday - as I 'had' to get up early to see the Venus passing the sun thing:)-)) - but the serialisation delay is about the extra delay I witness with a switch in the path. For 9000 bytes (I could quickly test that just for fun) this will creep up to about 0.7 ms.

So yes, I agree a good switch will not add much extra delay besides the serialisation delay. But no, it is certainly measurable. And that is just with 1 switch, of course.
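To make "certainly measurable" concrete: assuming each store-and-forward switch must receive the whole frame before retransmitting it, every switch in the path adds roughly one extra serialisation time per direction. A sketch under that assumption:

```python
def extra_rtt_ms(frame_bytes: int, link_bps: float, switches: int = 1) -> float:
    """Extra round-trip delay added by store-and-forward switches:
    one serialisation time per switch, in each direction."""
    one_way = frame_bytes * 8 / link_bps * 1000  # ms
    return 2 * switches * one_way

for size in (1500, 9000):
    print(f"{size} bytes: +{extra_rtt_ms(size, 100e6):.2f} ms on the round trip")
# 1500 bytes gives +0.24 ms, in line with the 0.2-0.3 ms measured above;
# 9000 bytes gives 0.72 ms each way (1.44 ms on the round trip).
```

The 0.72 ms one-way figure for 9000 bytes matches the "about 0.7 ms" mentioned above.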

Regarding the Zeptonics switch you link to: that is probably a cut-through switch and is in a totally different class €/£/$-wise. Cool stuff though.
 
Agree... if the source device's wire speed is say, Gigabit and in the LAN path to the destination there's a slower switch, of course latency will suffer due to the lower wire speed!

This is a case of "Dr. it hurts when I do *this*".
Well, don't do *that*.
 
I don't know how to word it properly perhaps, but serialisation delay depends on the link speed and the size of the PDU being sent (i.e. the number of bytes).

Of course delay is lower on gigabit networks, as the speed determines the delay.
The latency would still increase on gigabit networks as the number of bytes sent increases. I *think* we agree on this.

I was only refuting this statement you made:

A proper switch has a delay that wouldn't show in a ping timing test.

by trying to say: no, it does show, as you can measure it.
 
yeah, we often talk about delay and latency as if in a store-and-forward scenario, like bridges and routers.

An engineer friend who used to work for a company that made Ethernet switches said that in those days there were two kinds of switches: cut-through and buffered. The former stored only enough of the packet header to decide which port the destination address mapped to, then cut over to pass the rest of the bits through in serial. The buffered form stored the entire packet. I suppose that's almost a router. But there are these managed things called Layer 3 switches. I've always been confused by that term - seems like a contradiction.
 
A buffered switch is actually a store and forward switch (perhaps these terms are used as synonyms). The rest of your post is entirely correct as far as I know.

A cut-through switch is faster but historically had fewer features, because it only reads just enough bytes to figure out where to switch the frame to (exactly as you say). Cut-through switches seem to be making a comeback: Cisco now has the Nexus switches, and Arista and others add features by doing some hybrid magic: they read just enough bytes to support a certain feature, but no more bytes than required. That still makes them faster than store-and-forward switches, but a bit slower than a true cut-through switch.

Also, you cannot use gigabit cut-through switches together with 100 Mbit switches: the links need to run at the same speed, as a cut-through switch cannot compensate for the timing difference.
However, I don't think I have an application that requires a cut-through switch when gigabit Ethernet is being used. Big financial traders do seem to require such ultra-fast switching.
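The latency difference between the two designs is easy to illustrate. Assuming the switch can start transmitting after the 14-byte Ethernet header (cut-through; the 6-byte destination MAC is what it actually needs) versus after the whole frame (store-and-forward), a rough sketch:

```python
def forwarding_latency_us(frame_bytes: int, link_bps: float,
                          cut_through: bool = False) -> float:
    """Time the switch must wait before it can start transmitting,
    ignoring internal processing: the whole frame for store-and-forward,
    just the 14-byte Ethernet header for cut-through."""
    must_receive = 14 if cut_through else frame_bytes
    return must_receive * 8 / link_bps * 1e6  # microseconds

# 1500-byte frame on gigabit Ethernet
print(f"store-and-forward: {forwarding_latency_us(1500, 1e9):.3f} us")  # 12.000
print(f"cut-through:       {forwarding_latency_us(1500, 1e9, cut_through=True):.3f} us")  # 0.112
```

This also shows why cut-through needs both ports at the same speed: the switch starts transmitting before the frame has fully arrived, so it has no buffer to bridge a rate mismatch.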

When comparing to routers, the big difference is that a router has to change things like the time-to-live field (among others), and therefore the packet needs to be "routed" (hehe) to the CPU. In order to do that job, the router has to receive the entire packet before deciding what to do with it.
The CPU is inherently slower than a dedicated circuit built for the job, like in a switch (i.e. no wirespeed here:)).

An L3 switch generally does not do NAT (unless you really break the bank). There are probably enough models out there that can, but with Cisco the above seems to hold.
I'm not 100% sure on this, but L3 switches move the L3 processing to hardware (application-specific integrated circuits, ASICs) and therefore route faster than a real router, which burns CPU cycles.
My post is already lengthy, but note there are many technologies around that speed up both routers and switches, further muddying the waters:).
 
You might be thinking of fragment-free. That mode was rendered obsolete when it became feasible to replace all the hubs in your network with switches.
 
