QoS - Traditional - user-defined priorities

MidKnight

Occasional Visitor
I'm looking into my current setup, where I want one NIC, located on a NAS, to have the lowest QoS settings. As per the photos attached, I've set a maximum download bandwidth of 1 Mb/s to test with, and when I run a torrent that's bound to this NIC, the download speeds still max out the line. Is there something I've missed in the settings?
[Attached: three screenshots of the QoS configuration pages]
 
Realize that when sending data out a NIC there are only two speeds: MAXbps and 0 bps. QoS here is more of a prioritization scheme. If three devices want to send a packet out a NIC at the same time, the packet with the highest QoS level will leave first.

Say a device with a low QoS level has multiple packets to send and begins sending. A device with a higher QoS setting that has a packet to send will have to wait until the lower-QoS device finishes sending the packet currently being sent.

More expensive/sophisticated routers can actually be configured to send data at a specific rate, but that is only the effective rate. The actual rate is still full speed. It's done with smoke and mirrors.

In reality, QoS has many methodologies, each with its own uniqueness and limitations.
 
I don't think that is correct.

Packets go into a queue and enter a hierarchical token system. This system de-queues packets according to user-set priority, e.g. high-priority packets are de-queued before low-priority packets.

Additionally, the token system is also configurable to de-queue packets at a set rate so as to fit within user-defined rates (e.g. a bandwidth limit).
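
For anyone who wants to see what that hierarchy looks like on a Linux-based router like these, here is a minimal sketch using tc and HTB. The interface name, rates, and the NAS address are assumptions for illustration, not values from the screenshots in this thread.

Code:
# Hypothetical HTB hierarchy (br0 as on Asuswrt; rates/address invented).
# Root HTB qdisc; unclassified traffic defaults into class 1:30.
tc qdisc add dev br0 root handle 1: htb default 30
# Parent class caps everything at an assumed 10 Mb/s link rate.
tc class add dev br0 parent 1: classid 1:1 htb rate 10mbit ceil 10mbit
# High-priority class: guaranteed 8 Mb/s, may borrow up to the full ceiling.
tc class add dev br0 parent 1:1 classid 1:10 htb rate 8mbit ceil 10mbit prio 0
# Lowest-priority class for the NAS: hard-capped at 1 Mb/s (rate == ceil).
tc class add dev br0 parent 1:1 classid 1:30 htb rate 1mbit ceil 1mbit prio 7
# Steer traffic destined for the NAS (hypothetical address) into the slow class.
tc filter add dev br0 parent 1: protocol ip prio 1 u32 match ip dst 192.168.1.50/32 flowid 1:30

Because rate equals ceil on class 1:30, that class can never borrow spare bandwidth, which is exactly the behavior the original poster is after.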

--

The underlying method used to achieve this is kind of smoke and mirrors but it should work and work well!

You are correct that packets initially are able to enter the download stream at your line rate, but the QoS system works well to tame the speeds down to user-configurable rates.

Incoming bandwidth limiting is not an issue for TCP streams. The TCP protocol has the sending server negotiate transfer speed based on how long that server waited for returned ACK messages. TCP also has an ECN flag feature so it can slow down traffic even faster, rather than waiting to evaluate ACK delays. TCP also ramps up slowly to full bandwidth instead of saturating the link right away. This designed behavior fits well into how QoS is treating these packets behind the scenes, as QoS will either increase these delays or drop packets, so the sending server will in the end send at the rate the receiving user desires.
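
You can demonstrate that back-off effect yourself by artificially adding delay and loss and watching a TCP download slow down. This is purely an illustration using netem; the stock firmware's QoS shapes with HTB queues, not netem.

Code:
# Illustration only: added delay/loss makes TCP senders throttle themselves.
tc qdisc add dev br0 root netem delay 100ms loss 0.5%
# Remove the experiment afterwards:
tc qdisc del dev br0 root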

UDP limiting is a different story.

Your explanation isn't entirely inaccurate but it sure could use some updating, since limiting should be working!!

--

The end result with QoS is that queued traffic will be forced to meet user-set limits by either being queued or, if it exceeds the queue length, dropped. The user should always receive packets at their desired rate, even if the actual link transfer rate is higher, since the flagged packets should be queued (held in buffers) or dropped instead of being passed straight through to the user!
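
One way to confirm that queuing and dropping is actually happening is to read the per-class statistics; non-zero "dropped" and "overlimits" counters on the capped class mean the shaper is doing its job.

Code:
# Per-class byte/packet counters, including "dropped" and "overlimits".
tc -s class show dev br0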
--

Something is not working right with the above QoS setup, since that action is not happening.

Send the output of

Code:
 tc class show dev br0

It could be a UI glitch where you are setting "Minimum Reserved Bandwidth" instead of what is labeled as "Maximum Available Bandwidth"

(upload category has both settings, but the download category only has one setting)
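
To narrow that down quickly, check what ceiling actually got programmed into the kernel; a sketch, assuming a class for the NAS was created at all:

Code:
# The class created for the NAS should report a 1 Mb/s ceiling; a much larger
# ceil value here would support the theory that the UI wrote the wrong field.
tc class show dev br0 | grep -i ceil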
 
Packets go into a queue and enter a hierarchical token system. This system de-queues packets according to user-set priority, e.g. high-priority packets are de-queued before low-priority packets.

Additionally, the token system is also configurable to de-queue packets at a set rate so as to fit within user-defined rates (e.g. a bandwidth limit).
A lot deeper explanation than I wanted to go into, but nonetheless, once a packet is de-queued for transmission, the entire packet is sent; higher-prioritized packets that came into the queue after it will wait. Another influencer is MTU size, as the arithmetic below shows.
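
To put a number on that head-of-line blocking: serialization time is packet size divided by link rate, so the MTU sets how long a higher-priority packet can be stuck waiting behind one already on the wire.

Code:
# A full 1500-byte frame at an effective 1 Mb/s drains in
# 1500 * 8 / 1,000,000 = 0.012 s, i.e. 12 ms of head-of-line blocking;
# at 1 Gb/s wire speed the same frame serializes in about 12 us.
echo 'scale=6; 1500 * 8 / 1000000' | bc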
The user should always receive packets at their desired rate, even if the actual link transfer rate is higher, since the flagged packets should be queued (held in buffers) or dropped instead of being passed straight through to the user!
With an oscilloscope on the Ethernet, packets will be coming and going at the rated speed of the Ethernet link. De-queuing packets at different rates depends on the router's hardware/software sophistication. Measuring the packet speed after de-queuing is measuring the effective speed, not the real speed on the wire. Additionally, as you pointed out, there are many other TCP/UDP controls that could impact effective speed. One that comes to mind is windowing.

With true CoS, packets are marked so that the receiving router and all the routers in between can prioritize the actual packet based on the marking. I have not seen any router brand at this price point that marks packets. What we call CoS on the ASUS is just traffic prioritization using different techniques.
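
For comparison, this is what real marking looks like on a Linux box: a hypothetical iptables rule tagging SIP traffic as Expedited Forwarding, which DSCP-aware hops along the path could then honor. Nothing in this thread suggests the ASUS firmware does this.

Code:
# Hypothetical marking rule: tag outgoing SIP (UDP 5060) with DSCP class EF
# so any DSCP-aware router along the path could prioritize it.
iptables -t mangle -A POSTROUTING -p udp --dport 5060 -j DSCP --set-dscp-class EF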

The original poster is concerned that the NIC's packets are not being slowed enough, but there is no mention of other traffic at the time of measurement. Perhaps the setting would work if there were more traffic.
 
@ApexRon The question is... does it matter in benchmarks?

I've never had the pleasure of using equipment that has CoS... However, if there's always "space" on the wire (created by the queuing/bucket logic and by taking some bandwidth off the table), and you can always send your VoIP packets with very low delay, does it even matter that they don't have priority? If there's already "space", they don't even need to cut in line...

"Space" as I understand it is practically guaranteed on the upstream side of reserved bandwith whereas on the downstream it becomes a probability thing

Just trying to understand this better from a bird's-eye view.
 
@cloneman
On the highway, speed can kill. On a data network, speed is a good thing. A couple of decades ago, common carriers and ISPs were paranoid about data errors because they would cause retransmissions, adding more data traffic and slowing user response time. Then technology enhancements increased network speeds tremendously, so much so that retransmissions were no longer noticed by end users.

VoIP works best on high speed connections with true CoS.

For the ASUS product, the benefits of the CoS function may not be realized until times of congestion.
 
Anyway, the ceiling not being respected means that something is not working right within the HTB structure.
 
