Unbound - Authoritative Recursive Caching DNS Server

Run once more: cache. One more time: recursion again; run again ...
Working normally. In the Merlin firmware terminal you must specify the Unbound listening port:
Code:
@rgnldo:/tmp/root# dig @127.0.0.1 -p 53535 github.com

; <<>> DiG 9.14.4 <<>> @127.0.0.1 -p 53535 github.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64491
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;github.com.                    IN      A

;; ANSWER SECTION:
github.com.             900     IN      A       18.228.52.138

;; Query time: 243 msec
;; SERVER: 127.0.0.1#53535(127.0.0.1)
;; WHEN: Tue Jan 07 12:56:20 BRT 2020
;; MSG SIZE  rcvd: 55

@rgnldo:/tmp/root# dig @127.0.0.1 -p 53535 github.com

; <<>> DiG 9.14.4 <<>> @127.0.0.1 -p 53535 github.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17294
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;github.com.                    IN      A

;; ANSWER SECTION:
github.com.             898     IN      A       18.228.52.138

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53535(127.0.0.1)
;; WHEN: Tue Jan 07 12:56:22 BRT 2020
;; MSG SIZE  rcvd: 55
Here is my top result if it is of help:
It is using two thread instances; libevent takes care of distributing the queries between them.
Your tests should be done from a LAN client.
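If you want to confirm the thread count from the router shell, unbound-control can report it; a quick sketch, assuming unbound-control is set up (it is used later in this thread):
Code:
# Runtime status: the output includes the number of threads, loaded modules and uptime
unbound-control status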
 
[Image: screenshot of top output]
 
That is fantastic! Are you running Unbound, or Unbound + DoT? Also try the DNS spoofability test. For me, some of the DoT resolvers previously scored poorly; whenever I run Unbound alone I get Excellent.
 
Working normally. In the Merlin firmware terminal you must specify the Unbound listening port ... It is using two thread instances; libevent takes care of distributing the queries between them. Your tests should be done from a LAN client.
On my local client the same thing happens: if I run dig three times I get another recursive response; from the fourth run on, I hit the cache every time.

EDIT: It seems random; sometimes I will get a second recursive response (but no more than twice in total). There is no exact number of runs that triggers it, so I am guessing that is expected behavior for load balancing? It is just odd that the cache isn't synced.
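One way to see which thread answered and how each thread's cache is doing is the per-thread counters in unbound-control; a sketch, using stats_noreset so the counters are not cleared:
Code:
# Per-thread query and cache counters (thread0/thread1 with num-threads: 2)
unbound-control stats_noreset | grep -E "^thread[01]\.num\.(queries|cachehits|cachemiss)"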
 
On my local client, the same thing does happen - I have to run dig 3 times to hit cache. Is the cache split between threads? Or is something amiss with my setup?
Multi-threaded, without Stubby. Working fine from the terminal on my MacBook Pro.

Code:
MBP@rgnldo ~ % dig github.com

; <<>> DiG 9.10.6 <<>> github.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 63050
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;github.com.            IN    A

;; ANSWER SECTION:
github.com.        900    IN    A    18.231.5.6

;; Query time: 224 msec
;; SERVER: 2804:4474:206:6300::1#53(2804:4474:206:6300::1)
;; WHEN: Tue Jan 07 15:27:40 -03 2020
;; MSG SIZE  rcvd: 55

MBP@rgnldo  ~ % dig github.com

; <<>> DiG 9.10.6 <<>> github.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1457
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;github.com.            IN    A

;; ANSWER SECTION:
github.com.        898    IN    A    18.231.5.6

;; Query time: 1 msec
;; SERVER: 2804:4474:206:6300::1#53(2804:4474:206:6300::1)
;; WHEN: Tue Jan 07 15:27:42 -03 2020
;; MSG SIZE  rcvd: 55
 
Multi-threaded, without Stubby. Working fine from the terminal on my MacBook Pro ...
Huh. I might need to start from scratch then! Thanks for checking this out.

I think it's when I (nearly) spam the dig command with arrow-up, Enter. If it comes right after my last command (same domain), it gives a fresh recursive response one more time. After that, I can spam it all day getting ~1ms.
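To take the arrow-up timing out of the equation, a small loop makes the pattern easier to see (port and domain copied from the posts above):
Code:
# Run the same query five times and print only the query time;
# the first run (or occasionally two) should recurse, the rest should be cache hits.
for i in 1 2 3 4 5; do
  dig @127.0.0.1 -p 53535 github.com +noall +stats | grep "Query time"
done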
 
After that, I can spam it all day getting ~1ms.
Completely uninstall Unbound with the installer script. Reboot and install it again. Since it depends on haveged, stubby ... it usually restarts all services together: /opt/etc/init.d/rc.unslung restart
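If you would rather not bounce every Entware service, the individual init scripts can be restarted on their own; a sketch (S02haveged and S61unbound are the script names that appear in the syslog excerpt later in this thread):
Code:
# Restart just the entropy daemon and Unbound
/opt/etc/init.d/S02haveged restart
/opt/etc/init.d/S61unbound restart

# Or restart everything Entware manages, as suggested above
/opt/etc/init.d/rc.unslung restart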
 
@SolluxCaptor Uninstall diversion and disable AiProtection to verify.
Just completed these steps (uninstall Unbound, reboot, uninstall Diversion, disable AiProtection, reinstall Unbound, change thread settings, reboot, run dig). Unfortunately I get the same result (also tried from a LAN client afterwards):
Code:
dig @127.0.0.1 -p 53535 google.com

; <<>> DiG 9.14.4 <<>> @127.0.0.1 -p 53535 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57272
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com. IN A

;; ANSWER SECTION:
google.com. 900 IN A 64.233.177.102
google.com. 900 IN A 64.233.177.101
google.com. 900 IN A 64.233.177.100
google.com. 900 IN A 64.233.177.139
google.com. 900 IN A 64.233.177.138
google.com. 900 IN A 64.233.177.113

;; Query time: 27 msec
;; SERVER: 127.0.0.1#53535(127.0.0.1)
;; WHEN: Tue Jan 07 14:01:08 EST 2020
;; MSG SIZE rcvd: 135

andrew@RT-AX88U-48A8:/tmp/home/root# dig @127.0.0.1 -p 53535 google.com

; <<>> DiG 9.14.4 <<>> @127.0.0.1 -p 53535 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29157
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com. IN A

;; ANSWER SECTION:
google.com. 900 IN A 64.233.177.138
google.com. 900 IN A 64.233.177.113
google.com. 900 IN A 64.233.177.139
google.com. 900 IN A 64.233.177.102
google.com. 900 IN A 64.233.177.101
google.com. 900 IN A 64.233.177.100

;; Query time: 22 msec
;; SERVER: 127.0.0.1#53535(127.0.0.1)
;; WHEN: Tue Jan 07 14:01:09 EST 2020
;; MSG SIZE rcvd: 135

andrew@RT-AX88U-48A8:/tmp/home/root# dig @127.0.0.1 -p 53535 google.com

; <<>> DiG 9.14.4 <<>> @127.0.0.1 -p 53535 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27081
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com. IN A

;; ANSWER SECTION:
google.com. 897 IN A 64.233.177.102
google.com. 897 IN A 64.233.177.101
google.com. 897 IN A 64.233.177.100
google.com. 897 IN A 64.233.177.139
google.com. 897 IN A 64.233.177.138
google.com. 897 IN A 64.233.177.113

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53535(127.0.0.1)
;; WHEN: Tue Jan 07 14:01:11 EST 2020
;; MSG SIZE rcvd: 135

andrew@RT-AX88U-48A8:/tmp/home/root# dig @127.0.0.1 -p 53535 google.com

; <<>> DiG 9.14.4 <<>> @127.0.0.1 -p 53535 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43818
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com. IN A

;; ANSWER SECTION:
google.com. 897 IN A 64.233.177.138
google.com. 897 IN A 64.233.177.113
google.com. 897 IN A 64.233.177.139
google.com. 897 IN A 64.233.177.102
google.com. 897 IN A 64.233.177.101
google.com. 897 IN A 64.233.177.100

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53535(127.0.0.1)
;; WHEN: Tue Jan 07 14:01:12 EST 2020
;; MSG SIZE rcvd: 135

andrew@RT-AX88U-48A8:/tmp/home/root# dig @127.0.0.1 -p 53535 google.com

; <<>> DiG 9.14.4 <<>> @127.0.0.1 -p 53535 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45644
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com. IN A

;; ANSWER SECTION:
google.com. 896 IN A 64.233.177.102
google.com. 896 IN A 64.233.177.101
google.com. 896 IN A 64.233.177.100
google.com. 896 IN A 64.233.177.139
google.com. 896 IN A 64.233.177.138
google.com. 896 IN A 64.233.177.113

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53535(127.0.0.1)
;; WHEN: Tue Jan 07 14:01:12 EST 2020
;; MSG SIZE rcvd: 135
Going back to 1 thread removes this anomaly. Libevent2 is installed, and was already installed when Unbound was set up.
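To double-check that the build really is linked against libevent 2, both the binary and the Entware package can be inspected; a sketch (the exact opkg package name may differ):
Code:
# Show what unbound was compiled and linked against; look for libevent in the output
unbound -V

# Confirm the Entware libevent 2.x package is present
opkg list-installed | grep -i libevent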
 
Going back to 1 thread removes this anomaly.
Weird. Working fine here, from the Merlin firmware terminal:
Code:
@rgnldo:/tmp/home/root# dig @127.0.0.1 -p 53535 google.com

; <<>> DiG 9.14.4 <<>> @127.0.0.1 -p 53535 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64719
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             900     IN      A       216.58.202.174

;; Query time: 606 msec
;; SERVER: 127.0.0.1#53535(127.0.0.1)
;; WHEN: Tue Jan 07 16:12:25 BRT 2020
;; MSG SIZE  rcvd: 55

@rgnldo:/tmp/home/root# dig @127.0.0.1 -p 53535 google.com

; <<>> DiG 9.14.4 <<>> @127.0.0.1 -p 53535 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26998
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             898     IN      A       216.58.202.174

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53535(127.0.0.1)
;; WHEN: Tue Jan 07 16:12:27 BRT 2020
;; MSG SIZE  rcvd: 55

Code:
# number of threads and number of cache memory slabs
num-threads: 2
msg-cache-slabs: 4
rrset-cache-slabs: 4
infra-cache-slabs: 4
key-cache-slabs: 4

so-reuseport: yes
outgoing-range: 4096
num-queries-per-thread: 1024

# tiny memory cache
key-cache-size: 16m
msg-cache-size: 8m
rrset-cache-size: 8m
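Before restarting with a multi-thread config like the one above, it can be worth validating it first; a sketch (the conf path is an assumption for this installer):
Code:
# Validate the syntax; unbound's documentation also recommends that the
# *-slabs values be a power of 2 close to num-threads.
unbound-checkconf /opt/var/lib/unbound/unbound.conf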
 
Leave it in this mode. Over time you'll get the trick. Take it easy.
Yeah, I may have to. For me the dig output looks like yours, but if I run it a couple more times I get a new recursive answer. Then it's all cache hits until expiry. Super weird; I have the same config you posted above, too.
 
Weird. Working fine here, from the Merlin firmware terminal ... key-cache-size: 16m, msg-cache-size: 8m, rrset-cache-size: 8m
The rrset cache is preferably equal in size to the key cache, so it is better to set them to 16m and 16m.
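For anyone who wants to try that suggestion, a minimal sketch of the change and a check of the running values (the conf path is an assumption; the 16m figures come from the config above):
Code:
# Hypothetical edit: set key and rrset caches to 16m each, validate, restart
CONF=/opt/var/lib/unbound/unbound.conf
sed -i 's/^ *rrset-cache-size:.*/rrset-cache-size: 16m/' "$CONF"
sed -i 's/^ *key-cache-size:.*/key-cache-size: 16m/' "$CONF"
unbound-checkconf "$CONF" && /opt/etc/init.d/S61unbound restart

# Confirm what the running daemon is actually using
unbound-control get_option rrset-cache-size
unbound-control get_option key-cache-size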
 
The rrset cache is preferably equal in size to the key cache, so it is better to set them to 16m and 16m.
AH! This fixed it for me when running 2 threads (so far). I was wondering about this part of the Unbound documentation:
Code:
    # more cache memory, rrset=msg*2
    rrset-cache-size: 100m
    msg-cache-size: 50m
And also this part:
Code:
    # more outgoing connections
    # depends on number of cores: 1024/cores - 50
    outgoing-range: 950
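As a quick sanity check of that comment's formula for the two-thread setup being discussed here, 1024/2 - 50 works out to 462 per thread:
Code:
# Worked example of the "1024/cores - 50" guideline, assuming 2 cores/threads
echo $(( 1024 / 2 - 50 ))    # prints 462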
 
AH! This fixed it for me when running 2 threads (so far). I was wondering about this part of the Unbound documentation ...
Keep an eye on actual cache size after adding "extended-statistics: yes" to unbound.conf.
Code:
unbound-control stats_noreset | grep "^mem\.cache\."
Values reported are in bytes. I've removed all the custom cache-size, slabs and threads config and am running with defaults to see how the performance goes. So far my caches are small, but I've been bouncing around a lot between Unbound and NextDNS.
Code:
mem.cache.rrset=541352
mem.cache.message=155800
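Alongside the memory figures, the aggregate hit/miss counters give a feel for how effective the cache actually is; a sketch using the same stats_noreset output:
Code:
# Total queries and cache hits/misses across all threads since the last reset
unbound-control stats_noreset | grep -E "^total\.num\.(queries|cachehits|cachemiss)"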
 
Pushed v1.15 Hotfix/Warning

I upgraded to v384.15_Alpha o_O

Code:
(unbound_installer): 14513 Starting Script Execution (menu)

RT-AC68U S02haveged: Starting Haveged entropy /opt/etc/init.d/S02haveged
RT-AC68U haveged: haveged: Stopping due to signal 15
RT-AC68U haveged: haveged starting up
RT-AC68U admin: Started haveged from .
RT-AC68U haveged: haveged: ver: 1.9.5; arch: generic; vend: ; build: (gcc 7.4.0 CV); collect: 128K
RT-AC68U haveged: haveged: cpu: (); data: 32K (P); inst: 32K (P); idx: 18/40; sz: 31104/71972
RT-AC68U haveged: haveged: fills: 0, generated: 0
RT-AC68U S61unbound: Starting Unbound DNS server /opt/etc/init.d/S61unbound
RT-AC68U kernel: warning: process `unbound' used the deprecated sysctl system call with 1.40.6.
RT-AC68U admin: Started unbound from .
RT-AC68U rc_service: service 14975:notify_rc restart_dnsmasq
RT-AC68U custom_script: Running /jffs/scripts/service-event (args: restart dnsmasq)
RT-AC68U (service-event): 14977 Script not defined for service event: restart-dnsmasq
RT-AC68U custom_config: Appending content of /jffs/configs/hosts.add.
RT-AC68U custom_config: Appending content of /jffs/configs/dnsmasq.conf.add.
RT-AC68U custom_script: Running /jffs/scripts/dnsmasq.postconf (args: /etc/dnsmasq.conf)
RT-AC68U (dnsmasq.postconf): 15012 Starting ..... [/etc/dnsmasq.conf]
RT-AC68U Diversion: restarted Dnsmasq to apply settings, from /jffs/scripts/dnsmasq.postconf
RT-AC68U custom_script: Running /jffs/scripts/stubby.postconf (args: /etc/stubby/stubby.yml)
RT-AC68U (stubby.postconf): 15166 Quad 9 setting idle_timeout 2000
RT-AC68U stubby[15201]: Read config from file /etc/stubby/stubby.yml

RT-AC68U dnsmasq[15203]: cannot reduce cache size from default when DNSSEC enabled
RT-AC68U dnsmasq[15203]: FAILED to start up

RT-AC68U custom_script: Running /jffs/scripts/service-event-end (args: restart dnsmasq)
RT-AC68U (service-event-end): 15228 Script not defined for service event: restart-dnsmasq-end
RT-AC68U rc_service: watchdog 453:notify_rc start_dnsmasq

I now issue a warning, and skip the 'dnsmasq.postconf' set 'cache-size=0' modification.

Not 100% sure if this is acceptable?
 
Pushed v1.15 Hotfix/Warning ... I now issue a warning, and skip the 'dnsmasq.postconf' set 'cache-size=0' modification. Not 100% sure if this is acceptable?
My current position is that, to keep the Unbound cache fresh, you don't want dnsmasq caching in front of it, so I delete the dnssec options to allow a zero-size cache. Since there is nobody else to trust between dnsmasq and Unbound, I am fine using proxy-dnssec for the Unbound responses.
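For anyone curious what that looks like in practice, a minimal dnsmasq.postconf sketch along those lines; the pc_delete/pc_replace/pc_append helpers come from Merlin's /usr/sbin/helper.sh, and the exact lines are an illustration of the approach described above, not the installer's actual code:
Code:
#!/bin/sh
# /jffs/scripts/dnsmasq.postconf - illustrative sketch only
CONFIG=$1
. /usr/sbin/helper.sh

pc_delete "dnssec" "$CONFIG"                            # drop dnsmasq's own DNSSEC handling
pc_append "proxy-dnssec" "$CONFIG"                      # pass Unbound's validation status through
pc_replace "cache-size=1500" "cache-size=0" "$CONFIG"   # let Unbound do all the caching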
 
Keep an eye on actual cache size after adding "extended-statistics: yes" to unbound.conf ... So far my caches are small, but I've been bouncing around a lot between Unbound and NextDNS.
Very good. We all thank you for your cooperation.
 
Keep an eye on actual cache size after adding "extended-statistics: yes" to unbound.conf ...
Damn. Eventually I get the same thing. It just takes several queries to finally hit thread 2, and their caches are not synced. I believe this is by design (or config): if you look at all the stats, they show cache hits/misses for thread 0 and thread 1 separately. Usually it will hit the cache you filled first, but sometimes it will go to the second thread separately and cache there as well. After that, everything comes from the cache(s). I'm unsure why they do not share the cache, but I will reset everything tonight and try from scratch.
 