CakeQOS Cake Status

Yes. But all that LAN traffic might be impacting the router's performance when it is doing these tests.

It's a tiny bit of work. Look at the CPU usage.
 
Hello,

Drops are bad. They are what cake is designed to stop happening.
For what it's worth, my understanding is that some dropped packets is a good thing and actually a way sender and receiver can determine the adequate throughput/speed. Cake is then about choosing the right packet to drop, and dropping early. For instance, it drops more packets in my lower priority tins.

I find the following very informative https://www.bufferbloat.net/projects/bloat/wiki/TechnicalIntro/

The problem is that the TCP congestion avoidance algorithm relies on packet drops to determine the bandwidth available. A TCP sender increases the rate at which it sends packets until packets start to drop, then decreases the rate. Ideally it speeds up and slows down until it finds an equilibrium equal to the speed of the link. However, for this to work well, the packet drops must occur in a timely manner, so that the sender can select a suitable rate. If a router on the path has a large buffer capacity, the packets can be queued for a long time waiting until the router can send them across a slow link to the ISP. No packets are dropped, so the TCP sender doesn’t receive information that it has exceeded the capacity of the bottleneck link. It doesn’t slow down until it has sent so much beyond the capacity of the link that the buffer fills and drops packets. At this point, the sender has far overestimated the speed of the link.


In a network router, packets are often queued before being transmitted. Packets are only dropped if the buffer is full. On older routers, buffers were fairly small so they filled quickly, and therefore packets began to drop shortly after the link became saturated, so the TCP/IP protocol could adjust. On newer routers, buffers have become large enough to hold several megabytes of data, which can be equivalent to 10 seconds or more of data. This means that the TCP/IP protocol can't adjust to the speed correctly, as it appears to be able to send for 10 seconds without receiving any feedback that packets are being dropped. This creates rapid speedups and slowdowns in transmissions.
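The failure mode in that excerpt can be sketched as a toy simulation (my own illustrative code with made-up numbers, not a real TCP model): an additive-increase sender behind a tail-drop buffer, comparing a shallow buffer against a bloated one.

```python
# Toy discrete-time sketch of TCP-style rate probing behind a tail-drop
# buffer. Illustrative numbers only; not a real TCP implementation.

def max_queue_delay(buffer_pkts, link_rate=10, ticks=2000):
    """Return the worst queueing delay (in ticks) observed during the run."""
    rate = 1       # sender rate in packets per tick
    queue = 0      # packets waiting in the router buffer
    worst = 0.0
    for _ in range(ticks):
        queue += rate
        if queue > buffer_pkts:            # tail drop: buffer overflowed
            queue = buffer_pkts
            rate = max(1, rate // 2)       # multiplicative decrease on loss
        else:
            rate += 1                      # no loss signal: keep speeding up
        queue = max(0, queue - link_rate)  # link drains at a fixed rate
        worst = max(worst, queue / link_rate)  # delay = backlog / drain rate
    return worst

shallow = max_queue_delay(buffer_pkts=20)    # small buffer: early loss signal
bloated = max_queue_delay(buffer_pkts=2000)  # big buffer: loss signal comes late
# The bloated buffer lets far more data pile up before the sender ever backs
# off, so the worst-case queueing delay is orders of magnitude higher.
```

With the shallow buffer the sender oscillates around the link rate almost immediately; with the bloated one it keeps accelerating until a huge standing queue has formed, which is exactly the latency the excerpt describes.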

Regards
W.
 
Cake slows a sender by delaying acknowledgments and transmissions. Drops are terrible: a drop requires the entire sliding window to be retransmitted once the timer runs out, which is typically 2 seconds. Drops are what cause bufferbloat.
 
I’ve not read this before. Is there a source for this behavior?
 
It was in the video by one of the authors I posted months ago
 
I still need to listen to it all. At around 5:33, in that Cake presentation, Jonathan Morton says

"So AQM keeps the queue length short by choosing when to mark packets, or drop them, in order to tell the endpoints that in fact the link is congested and that they should slow down a little bit..."


FWIW, I'm perfectly OK if experts devise an algorithm that wisely drops a few packets for the greater good of the link.

The alternative to dropping, when facing a temporarily congested link, is probably to keep packets in the queue for a while, which might ultimately induce more latency because it might mean less feedback to the endpoints.
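That trade-off can be sketched with a toy model (my own illustrative code, not Cake's or CoDel's actual algorithm): one router that only drops when the buffer overflows, versus one that deliberately drops early once a small standing queue forms.

```python
# Toy comparison of tail-drop vs an early-drop (AQM-style) policy.
# Illustrative only: one additive-increase sender, one fixed-rate link.

def average_queue(early_drop_target=None, buffer_pkts=1000, link_rate=10,
                  ticks=2000):
    """Return the average queue length over the run."""
    rate, queue, total = 1, 0, 0
    for _ in range(ticks):
        queue += rate
        dropped = False
        if queue > buffer_pkts:          # tail drop: only when buffer is full
            queue, dropped = buffer_pkts, True
        elif early_drop_target is not None and queue > early_drop_target:
            queue -= 1                   # drop one packet early, on purpose
            dropped = True
        rate = max(1, rate // 2) if dropped else rate + 1
        queue = max(0, queue - link_rate)
        total += queue
    return total / ticks

tail_only = average_queue()                     # feedback only at overflow
with_aqm = average_queue(early_drop_target=30)  # feedback as a queue builds
# A few deliberate early drops keep the standing queue (and so the latency)
# a small fraction of what the deep tail-drop buffer accumulates.
```

The sender loses a handful of packets either way; the difference is how long everyone else's packets sit in the queue in the meantime.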

Regards
W.
 
Maybe the video around this post https://www.snbforums.com/threads/cakeqos-merlin.64800/post-638424

Thank you Morris for your feedback. It is not how I had interpreted the quote above from the bufferbloat site, but the 2 seconds that you mention for the retransmit puts things in perspective. I'll try to watch that video.
Yes, that's the one. It starts out simple, explaining the principles, and then gets down to the nitty-gritty. I was a Systems Programmer (today's Software Engineer) in the '70s and '80s, and one of my specialties was drivers and communications systems, so I ate this up. That was before I began designing and implementing networks.

Morris
 
Note that this is AQM, the way it works without SQM/Cake. He leaves out another feature, as did I, and that's a Source Quench, which tells a host to stop transmitting briefly. If you don't know TCP/IP, look up "TCP Sliding Window" and "TCP Slow Start".

Morris
 
Hello again,

https://openwrt.org/docs/guide-user/network/traffic-shaping/sqm-details explains

"CAKE looks at each data flow in turn and either releases or drops a packet from each flow to match the shapers' schedule and by dropping packets from a flow before it has built up a significant queue is able to keep each flow under control."

and also

"This may seem like madness after all 'isn't packet loss bad?', well packet loss is a mechanism that TCP uses to determine whether it is sending too much data and overflowing a link capacity. So shooting the right packets at the right time is actually a fundamental signalling mechanism to avoid excessive queueing."


So, I still think Cake drops packets on purpose.
TCP Slow Start actually also relies a lot on dropped packets. (Morris, the video is not for you, but for whoever is passing by.)

I "scanned" the slides in Morton's YouTube video and I didn't notice anything about delaying ACKs. That's not to say there are no delayed ACKs, but smart packet dropping is a main feature/benefit of Cake.
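For anyone passing by, slow start itself can be sketched in a few lines (my own illustrative code; real TCP tracks bytes, ACK clocking and timers): the congestion window doubles every round trip until the first loss, which is the feedback signal that ends the exponential phase.

```python
# Minimal sketch of TCP slow start followed by congestion avoidance.
# Illustrative units: the window is in packets, one step per round trip.

def slow_start_trace(loss_at_cwnd=64, rtts=12):
    """Return the congestion window at each round trip."""
    cwnd, ssthresh, trace = 1, None, []
    for _ in range(rtts):
        trace.append(cwnd)
        if ssthresh is None and cwnd >= loss_at_cwnd:
            ssthresh = cwnd // 2   # first loss: halve the window...
            cwnd = ssthresh        # ...and leave slow start
        elif ssthresh is None:
            cwnd *= 2              # slow start: exponential growth
        else:
            cwnd += 1              # congestion avoidance: linear growth
    return trace

print(slow_start_trace())
# → [1, 2, 4, 8, 16, 32, 64, 32, 33, 34, 35, 36]
```

Without that first loss (or an ECN mark) the exponential phase would never end, which is why a drop really is a signal and not just damage.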
 
The OpenWrt recommendations for ECN defaults are interesting.

Why do they prefer to have hosts on the local network deal with drops rather than send them a source quench (slow down)? I may be asking the wrong people.
 
With a score of B on my 100/100 fiber, should I even bother with QoS? When I run it I get an A+ but can't tell any difference. And I'm not having any problems without it, so is there something better about A+ that I'm missing?
 
Nice conversation. Thank you for your interest.

ECN looks cool. OpenWrt notes: "Note: this technique requires that the TCP stack on both sides enable ECN."
In 2017, it didn't seem to bring real-life benefits. See input from RMerlin and john9527 below.

Do you know if things have changed in 2021? Have you configured it specifically on your equipment?
EDIT: I just came across https://lists.bufferbloat.net/pipermail/cake/2020-December/005382.html, which raises my interest as Akamai is a big player AFAIK:
Not all servers have ECN support enabled. A SYN-ACK without the ECE bit set indicates it does not. The connection then proceeds as Not-ECT.

I'm reasonably sure Akamai has specifically enabled ECN support. A lot of smaller webservers are probably running with the default passive-mode ECN support as well (ie. will negotiate inbound but not initiate outbound).

- Jonathan Morton

For what it's worth, on my AC86U running Merlin 386.2 Beta 2, I got
Code:
# cat /proc/sys/net/ipv4/tcp_ecn
2
which supposedly means
2 – (default) enable ECN when requested by incoming connections, but do not request ECN on outgoing connections
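A small helper for reading and interpreting that sysctl (the value meanings below are the ones documented for Linux's net.ipv4.tcp_ecn; the function name is my own):

```python
# Interpret Linux's net.ipv4.tcp_ecn sysctl, per the kernel's
# ip-sysctl documentation.

TCP_ECN_MODES = {
    0: "ECN disabled",
    1: "request ECN on outgoing connections and accept it on incoming ones",
    2: "accept ECN when the peer requests it, but never request it (default)",
}

def describe_tcp_ecn(path="/proc/sys/net/ipv4/tcp_ecn"):
    """Read and describe the current ECN mode; only meaningful on Linux."""
    try:
        with open(path) as f:
            value = int(f.read().strip())
    except OSError:
        return "tcp_ecn sysctl not available on this system"
    return TCP_ECN_MODES.get(value, f"unknown value {value}")
```

On the AC86U above this would report mode 2, matching the quoted default: the router will negotiate ECN inbound but never ask for it outbound.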


2017 opinions about ECN:
Those congestion control protocols are mostly useless if they aren't used by both ends of the connection. That's why all those router hacks involving Vegas and such are more placebo than anything.
When I experimented with it, it either made no difference or made things slightly worse setting incoming+outgoing.

I didn't know about "source quench" / ICMPv4 Source Quench messages, I'm afraid. I found the following: http://www.tcpipguide.com/free/t_ICMPv4SourceQuenchMessages-3.htm:
"The lack of information communicated in Source Quench messages makes them a rather crude tool for managing congestion. In general terms, the process of regulating the sending of messages between two devices is called flow control, and is usually a function of the transport layer. The Transmission Control Protocol (TCP) actually has a flow control mechanism that is far superior to the use of ICMP Source Quench messages.


Another issue with Source Quench messages is that they can be abused. Transmission of these messages by a malicious user can cause a host to be slowed down when there is no valid reason. This security issue, combined with the superiority of the TCP method for flow control, has caused Source Quench messages to largely fall out of favor."

Take care
Best regards
W.


EDIT: Some info about the deployment status of ECN in 2021 below. I don't know how to interpret the numbers. Does the deployment state make it worthwhile to turn ECN completely on? For sure, ECN doesn't seem to be "universally" deployed.
https://www.ietf.org/archive/id/dra...bservations-00.html#name-rfc3168-aqm-activity
 
Puzzled...
If you don't notice a real life difference, indeed why bother... Or maybe just for the fun of experimenting as long as it brings you joy/excitement.

Personally, if not noticing a difference, I would go for the settings that give me an A+ ;-), or whatever settings are configured right now on your equipment.
Regards
W.
 
I tried it just to try it. Just curious really, but when I saw A+ I became interested because of the "improvement".

Edit: But A+ means 75/75 vs. 100/100 also...
 
I have cake set with options very similar to the defaults in the new beta of merlin (ie: overhead+speed changes only). I've always set the speeds based on maximum saturation of the network and observing ping.

After reading this thread, I also noticed cake reporting dropped packets. Continuously reducing the configured speed only reduced the throughput accordingly; I noticed no change in the drops. Curiously, and without scientific testing, there didn't even appear to be an appreciable difference in the number of drops, even though the throughput was considerably slower.

My understanding is that bufferbloat is essentially the saturation of a link by one or more sources, at the expense of the quality of other or all sources. So it makes sense in lay terms, that when cake notices a link has the potential to cause disruption, it takes means, including dropping packets in that link, to ensure the smooth overall running of the entire network.
 
I would expect the difference to show for applications that are being slowed by cake, for example a download. Realtime applications are going to find available buffer and will work fine.
 
It's actually the interface queue running out of space. Here is a simple explanation of how cake works:
Almost all application flows fall into one of two categories:
Those that don't flood the interface with traffic. Examples include VoIP, video, and typing on a terminal interface.
Those that flood the interface with data, such as file transfers and terminal output. These applications fill up the transmit queue, forcing all applications to wait for a spot on the queue.
Cake gives the applications that don't flood the queue priority access, and then lets the applications that do flood the queue transmit until they just about fill it.
The magic here, and it's a brilliant observation by the developers, is that the real-time applications are the ones that don't flood the queue, so they get priority over the ones that do.

If we boarded aircraft like this it would be much faster. Amazing that just the opposite is done. People with disabilities, and young children, go first. The buffer is immediately full!
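The two-category idea can be sketched as a toy per-flow queue (my own illustrative code; it mimics the flavour of flow isolation in fq_codel/Cake, not their actual algorithms): every flow gets its own queue, service is round-robin, and when the total backlog grows too large, packets are dropped from the fattest queue.

```python
from collections import deque

class FlowIsolatingQueue:
    """Toy flow-isolation qdisc: per-flow FIFOs, round-robin service,
    drop-from-longest-queue when the shared buffer is over its limit."""

    def __init__(self, limit=10):
        self.flows = {}        # flow id -> deque of packets
        self.order = deque()   # round-robin order of flows
        self.limit = limit     # total packets allowed across all flows

    def enqueue(self, flow_id, pkt):
        if flow_id not in self.flows:
            self.flows[flow_id] = deque()
            self.order.append(flow_id)
        self.flows[flow_id].append(pkt)
        if sum(len(q) for q in self.flows.values()) > self.limit:
            # shoot a packet from the biggest backlog, not the newest arrival
            max(self.flows.values(), key=len).popleft()

    def dequeue(self):
        for _ in range(len(self.order)):
            fid = self.order[0]
            self.order.rotate(-1)   # next call starts at the next flow
            if self.flows[fid]:
                return fid, self.flows[fid].popleft()
        return None

q = FlowIsolatingQueue(limit=5)
for i in range(8):                     # a greedy bulk flow floods the queue
    q.enqueue("bulk-download", f"bulk-{i}")
q.enqueue("voip", "voice-frame")       # a sparse real-time flow arrives last
first = q.dequeue()
second = q.dequeue()
# All the drops came out of the bulk flow, and the VoIP frame is served on
# the very next round instead of waiting behind the whole bulk backlog.
```

The design point is exactly the one above: the sparse flow never built a queue, so it never pays for the bulk flow's backlog.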
 
Hi. I'm new to the AX86U. Cake shows up in my QoS choices, but the AX86U is not listed as compatible from what I've seen. Is Cake compatible now with the AX86U?

Thanks,
AntonK
 
It's actually the interface queue running out of space.

And delaying a return response, or simply dropping the packet, would only seem to impact the sender. Either way, the router is saying to the sender, "slow down, you're flooding me", and the result is less data trying to flood the interface.

If we boarded aircraft like this it would be much faster. Amazing that just the opposite is done. People with disabilities, and young children, go first. The buffer is immediately full!

I believe the general consensus is that people with disabilities, given that they have a condition which has an (real*) impact on their quality of life, are given some preference at times.

* vs an impatient person who thinks that waiting at a terminal for a few extra minutes, is suffering an impact on their quality of life.
 
