DNS Caching Asus Stock / Merlin?

To check whether dnsmasq's own cache is getting overwhelmed by too many requests (it defaults to 1500 entries), see this post on how to monitor cache recycling:

http://www.snbforums.com/threads/dns-servers.23828/#post-177346
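For reference, dnsmasq can report its cache statistics directly. A minimal sketch, assuming the router answers DNS on 192.168.1.1 (adjust to your LAN IP):

Code:
    # Query dnsmasq's built-in counters via chaos-class TXT records:
    dig +short chaos txt cachesize.bind @192.168.1.1    # configured cache size
    dig +short chaos txt insertions.bind @192.168.1.1   # entries inserted so far
    dig +short chaos txt evictions.bind @192.168.1.1    # entries evicted to make room

    # Or have dnsmasq write a statistics summary to the system log:
    kill -USR1 $(pidof dnsmasq)

If evictions.bind stays at zero, the cache never fills and no recycling is taking place.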



Dnsmasq always runs as the DNS proxy and resolver on Asuswrt; it's not tied to parental controls.

Hey Merlin, just to follow up: I turned on DNS filtering in parental controls, using the "Router" DNS (in my case, OpenDNS), and the queries dropped dramatically... from thousands to hundreds. I'm speculating that this setting somehow forced caching, or that without it the requests were being passed upstream? Not sure if that's expected behavior. I'll keep an eye on it, but it's seemingly doing what I wanted in the first place.
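That drop would make sense if DNS Filter redirects client queries to the router, since everything then hits dnsmasq's cache first. Conceptually the redirect is something like the following sketch (not the firmware's exact rules; br0 and 192.168.1.1 are assumptions for the LAN bridge and router IP):

Code:
    # Force all LAN DNS traffic to the router, so clients that were
    # querying upstream servers directly now go through dnsmasq's cache.
    iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 -j DNAT --to 192.168.1.1
    iptables -t nat -A PREROUTING -i br0 -p tcp --dport 53 -j DNAT --to 192.168.1.1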
 

Attachments

  • stats.jpg (51.4 KB)
Merlin, I suggest adding a feature to adjust the size of the cache on the router. A lot of the routers that run your firmware have 256 MB of RAM. Unless by default they don't have a set size, and will use all the RAM if there are that many requests.
 

Tests have shown that for the vast majority of people, that cache never even gets filled. In fact, I failed to find a single person who was actually causing cache entries to get reused with the default value of 1500.

A larger cache does not always equal better performance. A lot of users would blindly mess with that value, then wonder why they actually see a decrease in performance rather than an improvement.
 
Still considering forcing a default TTL of an hour. It seems shorter TTLs are only set for load balancing, so while that would defeat their load balancing, it seems unlikely to cause any problems for me.

Anyone smarter than me see any problems I'm missing? I can't see any disadvantage.
 
I can't see any disadvantage.
Or to put it another way, what advantage would you get by doing so?

Your latest stats show that your upstream DNS requests are now minimal. Reducing them any further isn't going to have any noticeable positive effect. Increasing the TTL from, say, 5 minutes to 1 hour might create outages or degradation of service for up to 1 hour.

It's not just load balancing. Another typical case would be when a web site owner moves from one hosting company to another. Before the switch they would change the TTL to something small like 20 seconds. After the switch they'd change it back again to something larger. That way they minimise the disruption to their customers.

It looks like you're trying to solve a problem that doesn't exist. :)
 
Well, you're of course right that this is all academic; it wasn't a problem to begin with, just a learning experience.

In the example you gave, that site might be unavailable for, what, as much as half an hour? Seems like an edge case. I can always flush the cache (reboot).

But overall, day-to-day bandwidth would be saved. If that matters.
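As an aside, on Asuswrt-Merlin a full reboot isn't needed to flush the cache; restarting dnsmasq from an SSH session discards it:

Code:
    # Restart dnsmasq, which starts over with an empty in-memory DNS cache:
    service restart_dnsmasq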
 
But overall, day-to-day bandwidth would be saved. If that matters.
As you say, it's pretty academic as I'm sure that in 99.99% of cases it will make no difference.

Given that each DNS request is a few tens of bytes, I don't think bandwidth is an issue. Your average embedded web advert is probably 1000 times bigger!

Personally I think I'd like to feel confident that when a web site doesn't respond it's a problem with the site itself, rather than wondering "Is it the web site? Is it the cache? Should I wait an hour? Should I reboot the router? Will my wife hit me for interrupting her Netflix program?". :)
 

In the case of load balancing, which I suspect is the majority of cases (like Netflix), it will still work. In the very odd edge case where a site won't respond, I can reboot the router, if I even notice at all.
 
Still considering forcing a default TTL of an hour. It seems shorter TTLs are only set for load balancing, so while that would defeat their load balancing, it seems unlikely to cause any problems for me.

Anyone smarter than me see any problems I'm missing? I can't see any disadvantage.

Enforcing a larger TTL shouldn't be a big problem, as long as the new value is reasonable. It's not like you have a LAN of 500+ clients there, bypassing any kind of load balancing on the CDN's end ;)
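For anyone who wants to try it: dnsmasq has a min-cache-ttl option for exactly this (capped at one hour at compile time). On Asuswrt-Merlin, a sketch would be to drop it into the custom config file, assuming JFFS custom scripts and configs are enabled:

Code:
    # /jffs/configs/dnsmasq.conf.add
    # Cache records with short TTLs for up to an hour instead:
    min-cache-ttl=3600

Then restart dnsmasq (service restart_dnsmasq) for it to take effect.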
 
I personally have my dnsmasq set to cache 10k entries; whether that's enough really depends on the usage pattern.

People who use browser add-ons such as RequestPolicy, NoScript or uBlock in advanced mode will know that browsing a single website can trigger over a dozen DNS lookups, due to all the tracking URLs, ads and multiple hostnames for static content common on modern sites. So just a few minutes of web browsing can easily fill a DNS cache with hundreds of records. Once that is understood, it's not hard to realise that a 1500-entry DNS cache is not very big.
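On Asuswrt-Merlin, a 10k cache like the one mentioned above could be set through the same custom config mechanism (a sketch, assuming JFFS custom configs are enabled):

Code:
    # /jffs/configs/dnsmasq.conf.add
    # Raise dnsmasq's cache from the default 1500 entries:
    cache-size=10000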

Two other possible reasons for regular, repetitive lookups are:

1. Short TTLs on DNS records (see the dig sketch below).
2. DNS queries that bypass the cache, e.g. Firefox used to have this problem until angry server admins filed a bug report to get it fixed.

Make sure the DNS server on the LAN client is set to the IP of the router.
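To see how short a record's TTL actually is, dig shows it in the second column of the answer section; repeating the query against the router (192.168.1.1 assumed here) shows the TTL counting down while the entry sits in dnsmasq's cache:

Code:
    # The number before IN/A is the remaining TTL in seconds:
    dig +noall +answer snbforums.com @192.168.1.1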
 
On RouterOS I set the DNS cache to 16MB (on the RB450G) and 32MB on my CCR1036. I found that even if you have a lot of users (relative to a home) or heavy use, it can still use less than 8MB of cache; on a business or enterprise network you would need a much larger cache. You can fit 2 to 3 entries per kilobyte.

Hijacking DNS requests really helps reduce requests. The amount of traffic and bandwidth required to do a DNS lookup is very little, but the cache helps speed things up significantly, such as faster web browsing (even faster if you have a web cache); apps will also run faster when they have to communicate over the net, because they don't have to go far to perform a DNS lookup.

Make sure your DNS cache is bigger than the maximum it would ever use. On a home network about 4-8MB is enough; for a network with its own local DNS you would want to add another MB or two for the static entries, assuming you have up to a thousand. For larger networks you can make do with 16MB or even 20MB of cache. The cache required per user shrinks as the network grows, because a lot of people and devices make the same queries often. Avoid letting your cache fill up completely.
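For reference, setting and checking this on RouterOS goes roughly like this (a sketch; the exact cache-size syntax varies a little between RouterOS versions):

Code:
    # Set the DNS cache to 16MiB, then check how much is actually used:
    /ip dns set cache-size=16384KiB
    /ip dns print
    # The cache-used field shows current usage, so you can verify the
    # cache never comes close to filling up.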
 
Close enough that I think I can ask here:

So let's say "DNS filtering" proves effective and stops clients from side-stepping local DNS. What's the easiest way of figuring out what the offending clients are?
 
I'd say: disable DNS Filtering, then add an iptables rule that logs all DNS requests that aren't coming from the router. Simples.
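A sketch of such a rule, assuming the LAN bridge is br0 (adjust to your setup). Client queries to outside resolvers traverse the FORWARD chain, while queries to the router itself do not, so only the bypassing clients get logged:

Code:
    # Log DNS queries from LAN clients headed for external servers;
    # the source IP in the log identifies the offending client.
    iptables -I FORWARD -i br0 -p udp --dport 53 -j LOG --log-prefix "DNS-DIRECT: "
    iptables -I FORWARD -i br0 -p tcp --dport 53 -j LOG --log-prefix "DNS-DIRECT: "

    # Watch the hits (Asuswrt logs to /tmp/syslog.log):
    tail -f /tmp/syslog.log | grep DNS-DIRECT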
 
