Without QoS enabled, how does my RT-AX86U allocate bandwidth?


Qbcd

Regular Contributor
Noob question here.

I have an Asus RT-AX86U router and I'm very happy with it. It serves about 10-15 clients at any time, all wireless except for two wired ones.

I haven't felt a need to enable QoS, even though my internet is not super fast (125 down/20 up). The router seems to do a pretty good job balancing load, even if I am maxing out the connection on my PC, others don't seem to have any issues. I've run two simultaneous speed tests on my phone and my computer and it seems to give them roughly equal bandwidth.

So my question is, without QoS, how does the router allocate bandwidth? Does it just split it evenly? Let's say I'm downloading something and maxing out the connection at 125 Mbps, and then someone else wants to stream Netflix, which needs 15 Mbps. The sensible thing to do would be to give the streamer their full 15 Mbps and give me the remaining 110 Mbps, but does the router know to do that without QoS? Mine seems to be doing it, but I can't quite confirm how exactly it's handling allocation, hence my question.
 
First in, first out.


There's no prioritization (except maybe if there's CoS applied at the switch level; I don't know whether Broadcom/Asus enable that by default).
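
To picture what "first in, first out" means in practice, here's a minimal sketch (plain Python, purely illustrative, not Asus/Broadcom firmware code, and the client names are made up) of a single bottleneck queue that drains packets strictly in arrival order:

```python
from collections import deque

# A purely illustrative FIFO queue at the WAN bottleneck. Packets are
# tagged with a client name only so we can watch the outcome; the queue
# itself never looks at who a packet belongs to.
queue = deque([
    {"client": "PC",    "size": 1500},
    {"client": "PC",    "size": 1500},
    {"client": "Phone", "size": 1500},
    {"client": "PC",    "size": 1500},
    {"client": "TV",    "size": 1500},
])

# First in, first out: whatever arrived first is sent first.
while queue:
    pkt = queue.popleft()
    print(f"sending {pkt['size']}-byte packet from {pkt['client']}")
```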
 
But doesn't that imply that if I max out the connection then no one else can do anything? Right now without QoS, if I run a speed test on one device, and then 3 seconds later run a speed test on another device, the speed of the first one gradually drops off to about half and they end up sharing it roughly equally.
 
No, because the queueing is done per packet, not per client connection.
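
That's also why two simultaneous speed tests settle at roughly half each. A toy model of it (illustrative only, with made-up numbers, not how the firmware actually schedules anything): when both clients always have a packet waiting, each transmission slot at the bottleneck goes to whichever packet got there first, so over time each backlogged client ends up with about the same share:

```python
import random

LINK_MBPS = 125          # toy downstream capacity
PACKET_BITS = 1500 * 8   # one full-size packet, in bits
NUM_PACKETS = 100_000    # packets drained from the bottleneck in this run

# Two greedy clients (both running a speed test) always have a packet
# waiting, so each slot at the bottleneck goes to whichever packet happened
# to arrive first -- modelled here as a coin flip per slot.
sent_bits = {"PC": 0, "Phone": 0}
for _ in range(NUM_PACKETS):
    winner = random.choice(["PC", "Phone"])
    sent_bits[winner] += PACKET_BITS

total = sum(sent_bits.values())
for client, bits in sent_bits.items():
    share = bits / total
    print(f"{client}: ~{share * LINK_MBPS:.1f} Mbps ({share:.0%} of the link)")
```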
Oh I see. So in practice, what is the end result? If you have one client trying to download a video game at max speed, and you have 2 others trying to stream YouTube and 2 others trying to browse, then you have 4 clients requesting packets, but at very different rates. How will the split work out in the end?

And does it matter if one client is wired (say the one downloading the video game) and the others are wireless, since the wired client will have less latency, will he "beat" the others with the requests?
 
The end result is that the queuing is still done per packet.

That's what makes it work.
 
In a theoretical scenario where all four are connected to a server that can push faster than your Internet connection, and they have fairly similar latency, they would each get a quarter of the bandwidth.
since the wired client will have less latency, will he "beat" the others with the requests?
It depends on the exact scenario. With TCP, it may, since TCP requires the client to acknowledge the reception of each packet. With UDP (no packet acknowledgement), the LAN latency shouldn't have any impact.

Ultimately, what this means is that without QoS, everything is handled in a "best effort" manner, with nobody getting preferential treatment.
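
For the practical case in the first post (one download trying to max the line while a stream only needs 15 Mbps), that best-effort sharing usually settles out close to a max-min fair split: flows that want less than an equal share keep only what they need, and the leftover flows back to the greedy one. Here's a back-of-envelope sketch of that split (an approximation of how competing TCP flows tend to converge, not anything the router actually computes, and the demand numbers are invented):

```python
def max_min_share(capacity_mbps, demands_mbps):
    """Rough max-min fair split: each flow gets at most an equal share,
    and bandwidth a flow doesn't need is redistributed to the rest."""
    allocation = {}
    remaining = capacity_mbps
    pending = dict(demands_mbps)
    while pending:
        fair = remaining / len(pending)
        # Flows that want less than the current fair share get exactly what they ask for.
        satisfied = {flow: d for flow, d in pending.items() if d <= fair}
        if not satisfied:
            # Everyone left still wants more than the fair share: split evenly.
            for flow in pending:
                allocation[flow] = fair
            return allocation
        for flow, d in satisfied.items():
            allocation[flow] = d
            remaining -= d
            del pending[flow]
    return allocation

# Hypothetical demands on a 125 Mbps line.
print(max_min_share(125, {"game download": 1000, "Netflix": 15, "browsing": 5}))
# -> the stream and browsing keep what they need; the download gets the rest (~105 Mbps).
```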
 
I see, thanks. With so many variables, it's hard to conceptualize and hard to predict. But in my case it all seems to work, so I think I'll stick with no QoS. That's probably for the best, as I've read that enabling it can cause issues, and troubleshooting those is hard again because of how many variables there are to consider.
 
Does QoS slow down the max bandwidth if, for example, only one computer is downloading a gigantic file?
 
It shouldn't. That client will only get throttled if something with a higher priority needs bandwidth, in which case some of the bandwidth gets reserved for the higher-priority queue.
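
As a toy illustration of that behaviour (made-up numbers and a deliberately simplified priority model, not the actual Adaptive or Traditional QoS implementation): a bulk download can use whatever a higher-priority class isn't using, and only yields when that class actually needs its reserved share.

```python
def bulk_bandwidth_mbps(link_mbps, high_prio_demand_mbps, high_prio_reserved_mbps):
    """Toy model of a priority class with reserved bandwidth: the bulk
    download may use whatever the higher-priority class isn't actually
    using, and only has to yield (up to the reserved amount) on demand."""
    used_by_high = min(high_prio_demand_mbps, high_prio_reserved_mbps)
    return link_mbps - used_by_high

# Only the big download is active: it still gets the whole 125 Mbps line.
print(bulk_bandwidth_mbps(125, high_prio_demand_mbps=0, high_prio_reserved_mbps=40))

# A 25 Mbps higher-priority flow starts: the download drops to 100 Mbps.
print(bulk_bandwidth_mbps(125, high_prio_demand_mbps=25, high_prio_reserved_mbps=40))
```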
 
Presumably the total throughput would still be limited by the hardware capabilities of the router in question for the given QoS type? (I've lost track of the impact of the different QoS types.)
 
