[tools] Waveform Bufferbloat Test


Yes, I can see using it at slow speeds. I could have used it back in the DSL years.
I guess you know it only comes into play when you saturate your 1 gig connection. So, they probably are not doing anything to help.

That's not what bufferbloat is about. Bufferbloat can occur on high bandwidth connections. Bufferbloat comes from the oversized buffers that appeared as bandwidth speeds increased and memory costs dropped. Yes, it can also be an issue if you saturate a connection, but even if it's not saturated, if traffic is poorly managed, without CAKE or FQ-CODEL (or a similar algorithm) to drop packets early, you can get jitter or stuttering while on a VoIP or video call if there is enough traffic to fill the buffers (independent of bandwidth usage).

It is true that the TESTS saturate the connection, but only because that's a sure way to trigger bufferbloat visibility on connections where it can be a problem. It is NOT because bufferbloat only occurs if the pipe is full.
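If it helps to see how a test like this works, here is a minimal sketch of the same idea in Python: measure round-trip time while the line is idle, then again while a bulk download keeps the downlink busy, and compare. The ping endpoint and the download URL below are placeholders I picked for illustration, not anything Waveform actually uses.

```python
# Minimal bufferbloat-check sketch: RTT idle vs. RTT while the downlink is busy.
# The endpoint and URL below are illustrative assumptions, not the Waveform test.
import socket
import threading
import time
import urllib.request

PING_ENDPOINT = ("1.1.1.1", 443)                     # any reachable TCP host:port
LOAD_URL = "https://speedtest.example.com/1GB.bin"   # hypothetical bulk-download URL

def tcp_rtt_ms(endpoint, samples=10):
    """Approximate round-trip time by timing TCP handshakes."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection(endpoint, timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000)
        time.sleep(0.2)
    return sum(times) / len(times)

def generate_load(stop: threading.Event):
    """Keep the downlink busy until asked to stop."""
    with urllib.request.urlopen(LOAD_URL) as resp:
        while not stop.is_set() and resp.read(64 * 1024):
            pass

idle = tcp_rtt_ms(PING_ENDPOINT)
stop = threading.Event()
threading.Thread(target=generate_load, args=(stop,), daemon=True).start()
time.sleep(2)                                        # give the queue time to fill
loaded = tcp_rtt_ms(PING_ENDPOINT)
stop.set()
print(f"idle: {idle:.1f} ms  loaded: {loaded:.1f} ms  bloat: {loaded - idle:+.1f} ms")
```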
 
I used to play CSS on a 500kbps/50kbps connection with no issues. (circa 2005)

CS 1.6 was on dial up!!

From the site:
Do your video or audio calls sometimes stutter? Does your web browsing slow down? Do video games lag? - NOPE

If so, bufferbloat may be to blame.
Yeah, or 20 other reasons.

[Attached screenshot: Waveform Bufferbloat Test results]
Hmmmmmmmm low latency gaming........due to bufferbloat or due to ping from living in the middle of nowhere???

Absolutely NOT from ping due to being in the middle of nowhere. That 23ms ping is the baseline determined by location. It's the +22ms (total of 45ms) that occurs due to poor buffer management, leading to an effective increase in latency under load. The flag on "Low Latency Gaming" under load just means that latency has increased beyond 40ms, which, at 45ms (23ms + 22ms), it has. At 45ms, that's not a disaster for many forms of gaming (fine for locally rendered games), but it would be bad for game streaming.

+22ms is not terrible. That's pretty good.

In some severe cases, users with high bandwidth pipes can see increases of hundreds of ms when under load. And "under load" does not require the pipe to be saturated, just in use. This is because the trigger for the delay is not necessarily that the pipe is full, but that the buffers CAN (depending on various factors) delay packet exchanges.
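To put rough numbers on how a queue turns into added latency: a packet that arrives behind bytes already sitting in a buffer has to wait for those bytes to drain at the link rate, regardless of how fast the rest of the path is. A small sketch with illustrative (not measured) numbers:

```python
# Queueing delay added by a standing buffer: buffered_bytes drain at the link rate.
def queue_delay_ms(buffered_bytes: int, link_mbps: float) -> float:
    return buffered_bytes * 8 / (link_mbps * 1_000_000) * 1000

print(queue_delay_ms(1_000_000, 20))     # 1 MB queued on a 20 Mbps uplink -> 400.0 ms
print(queue_delay_ms(1_000_000, 1000))   # the same buffer on 1 Gbps -> 8.0 ms
```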

This will ALSO occur without CAKE, FQ-CODEL, or similar algorithms in effect if the pipe is full, but not exclusively then. Further, with those algorithms managing traffic, even if the pipe is full, Internet use will still "feel" responsive. The available bandwidth will be reduced due to congestion, obviously, but latency should remain close to the unloaded latency. And latency, far more than bandwidth, is what gives a "feel" of responsiveness for most actions to a user.
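For a sense of what CAKE/FQ-CODEL are doing to keep that loaded latency near the unloaded number: the CoDel part watches how long each packet actually sat in the queue and starts dropping once that delay has stayed above a small target for a full interval, which tells senders to back off before the queue grows. This is a heavily simplified sketch of that idea, not the real fq_codel/CAKE code; the 5 ms / 100 ms constants are the commonly cited CoDel defaults.

```python
# Simplified CoDel-style decision: drop when queue sojourn time stays above
# TARGET_MS for at least INTERVAL_MS. Real AQMs (RFC 8289, CAKE) do far more.
TARGET_MS = 5.0       # acceptable standing queue delay
INTERVAL_MS = 100.0   # how long the delay may exceed the target before dropping

class CodelSketch:
    def __init__(self):
        self.first_above = None   # time the sojourn delay first exceeded TARGET_MS

    def on_dequeue(self, now_ms: float, enqueued_ms: float) -> str:
        sojourn = now_ms - enqueued_ms
        if sojourn < TARGET_MS:
            self.first_above = None          # queue drained below target: reset
            return "deliver"
        if self.first_above is None:
            self.first_above = now_ms        # start the grace interval
            return "deliver"
        if now_ms - self.first_above >= INTERVAL_MS:
            return "drop"                    # persistent queue: tell the sender to slow down
        return "deliver"

q = CodelSketch()
print(q.on_dequeue(now_ms=1000.0, enqueued_ms=998.0))    # 2 ms in queue -> deliver
print(q.on_dequeue(now_ms=1200.0, enqueued_ms=1150.0))   # 50 ms -> deliver, start timer
print(q.on_dequeue(now_ms=1350.0, enqueued_ms=1290.0))   # still bloated 150 ms later -> drop
```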

Just to show what you can expect with proper bufferbloat prevention in place, note the under load increases of only 0ms and 2ms for download and upload respectively (the 30ms unloaded is because I'm far from the server, nothing I can do about that):
[Attached screenshot: latency results, unloaded and under download/upload load]


And potentially more important, depending on usage (especially for VoIP and video calling), are the jitter metrics that the Bufferbloat test also shows (unloaded, under download load, under upload load respectively):
[Attached screenshot: jitter results, unloaded and under download/upload load]
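I can't vouch for the exact formula the Waveform page uses for its jitter numbers, but a common way to express jitter is the average absolute difference between consecutive latency samples, which is easy to compute from the same RTT measurements (the sample values below are made up for illustration):

```python
# Jitter as the mean absolute difference between consecutive RTT samples (ms).
def jitter_ms(samples: list[float]) -> float:
    return sum(abs(b - a) for a, b in zip(samples, samples[1:])) / (len(samples) - 1)

unloaded = [30.1, 30.4, 29.8, 30.2, 30.0]    # steady line: jitter well under 1 ms
loaded   = [31.0, 45.3, 33.8, 52.1, 36.4]    # bloated line: RTT swings of 10+ ms
print(f"{jitter_ms(unloaded):.2f} ms unloaded  vs  {jitter_ms(loaded):.2f} ms under load")
```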


To your point, there are indeed places where bufferbloat doesn't matter much: file transfers or video streaming. If that's all you care about, then you're correct that bufferbloat isn't important to try to manage. For those use cases, available bandwidth is king. But for VoIP, video calls, gaming (especially game streaming), and web browsing (assuming not a lot of huge images to download), consistent low latency and low jitter are more important.
 
Check out this test I just did
 
Like the ISP applied QoS upstream. With the fake test above I'm getting an A+ with a simple bandwidth limiter.

Don't forget to purchase one of the advertised best routers for mitigating bufferbloat at the bottom of the page.
 

[Attachment: Screenshot 2024-08-22 at 8.50.01 PM.png]
I can kind of see this. Cisco uses voice VLANs with priority so you don't have buffer problems in your network with normal traffic.
I still think the best solution is to buy a faster internet connection and not worry about data flows. If you carry voice traffic, then you need to worry about it in your network.

PS
I don't think small home routers will have buffer issues other than at the internet connection, as the networks are too small, without enough devices and switches.
 

@coxhaus, those do different things. Latency, bandwidth, and degradation of latency under load are three distinct functions of a connection. Any of them can be "good" or "bad" independent of the others. Yes, you can spend more for a higher bandwidth connection, but all else being equal, that would have zero effect on latency or change to latency when under load. Similarly, you could move so you're across the street from the DSLAM and test to a server in the same location and get 2ms latency, which would have no effect on bandwidth or delta to latency under load.

Bandwidth determines how much you can transfer per second, but has no bearing on how long it takes the first packet of that transfer to begin. E.g., we could have a terabit laser connection to the moon, but latency would still be thousands of milliseconds based on distance and the speed of light.
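Just to back that up with the arithmetic (average Earth-Moon distance and the speed of light; note that the link rate never appears in the calculation):

```python
# Physical floor on latency for the moon example: distance / speed of light.
EARTH_MOON_KM = 384_400        # average Earth-Moon distance
LIGHT_KM_PER_S = 299_792.458

one_way_ms = EARTH_MOON_KM / LIGHT_KM_PER_S * 1000
print(f"one way: {one_way_ms:.0f} ms, round trip: {2 * one_way_ms:.0f} ms")
# -> roughly 1282 ms one way, ~2564 ms round trip, no matter the bandwidth
```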

Latency (unloaded) is how long the round trip takes. This is a function of distance and switching times. Often, this can't be helped or hurt much by the end user, unless you happen to have access to multiple ISPs and some of those ISPs have excessive switches or route across long distances (I'm actually in this boat -- in NH, but my ISP routes all our traffic through Atlanta, adding about 10-20ms of latency to all transactions).

Added latency under load (also called bufferbloat) is a function of packet handling by the routers. Note that it doesn't necessarily take much load (it does NOT require saturating the connection) to trigger this effect and the increased latency. Also, this has nothing to do with either of the above metrics. It occurs when the routers between the two endpoints have to make decisions about packet handling and don't do it well. The source of this could be your local router, or it could be a problem with the ISP and out of your control. If all the routers along the path flush their buffers properly, the delta for latency under load should remain near zero. If not, this can mount to tens, hundreds, or in the worst cases over a thousand milliseconds, depending on how bad the packet management is. If this is a local issue with your home router, then adding CAKE, FQ-CODEL, or other QoS management can fix it. But if it's a problem at any one of the routers managed by your ISP, there is nothing you can do about it directly (including paying more for a higher speed pipe). The only thing you can do is inform your ISP that they have one or more mismanaged routers and hope that they do something about it.
 
Using CAKE is a one-sided operation, as you have no control over the ISP, which makes most QoS operations ineffective with today's internet speeds. I have worked on networks large enough that we had to use QoS back when communication lines were slow. Most home networks are not large enough to add much latency or need QoS.

The only QoS I have really thought I needed in the last 10 years was when I set up a small real estate network with 19 IP phones. They handle lots of images and are on the phones all the time. I used Cisco small business switches and set up a Voice VLAN for the IP phones.
 
