Unbound unbound_manager (Manager/Installer utility for unbound - Recursive DNS Server)


What are the other 4 tools you used for validation? Would you mind listing these here please?






and the one you already have just for sake of having all for reference:


I apologize, but these tools are for DNSSEC validation and I just realized that I posted here instead of in the other relevant thread I was reading.

Feel free to delete or disregard.

thank you.
 
It appears that this is specific to Rootcanary. When using other validation tools (four total), I get successful validation.

My apologies to Martineau for taking up your valuable time. Thank you for what you do. You rock!
I see the same with 3.19 and 3.20. Is anyone able to see Rootcanary working as before or is this non-functional for everyone?

Performing: "dig com. SOA +dnssec" does show the AD flag, indicating DNSSEC is functioning (note the "ad" in the flags line of the second query below, which was answered by unbound at 127.0.0.1):
Code:
; <<>> DiG 9.16.6 <<>> txt com.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28368
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;com.                           IN      TXT

;; AUTHORITY SECTION:
com.                    900     IN      SOA     a.gtld-servers.net. nstld.verisign-grs.com. 1601604595 1800 900 604800 86400

;; Query time: 19 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Thu Oct 01 22:10:23 EDT 2020
;; MSG SIZE  rcvd: 105


; <<>> DiG 9.16.6 <<>> com. @127.0.0.1 -p 53
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50881
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1472
;; QUESTION SECTION:
;com.                           IN      A

;; AUTHORITY SECTION:
com.                    1200    IN      SOA     a.gtld-servers.net. nstld.verisign-grs.com. 1601604610 1800 900 604800 86400

;; Query time: 39 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu Oct 01 22:10:23 EDT 2020
;; MSG SIZE  rcvd: 105
 
Rootcanary test definitely broken - have nudged developer to fix ;).
 
Yes, definitely broken...
Screenshot 2020-10-02 at 9.07.32 am.png
 
Getting error on install

unbound-checkconf: error while loading shared libraries: libevent-2.1.so.7: cannot open shared object file: No such file or directory

***ERROR INVALID unbound configuration - use option 'vx' to correct 'unbound.conf' or 'rl' to load a valid configuration file

or 'e' exit; then issue debug command

unbound -dv
 
Same error here. I haven't changed anything, and it was running fine when I last checked a couple of weeks ago.
 

Try:

opkg remove libunbound-light
 
ASUSWRT-Merlin RT-AX88U 384.19_0 Fri Aug 14 19:20:07 UTC 2020
admin1@RT-AX88U:/tmp/home/root# opkg remove libunbound-light
No packages removed.
Collected errors:
* print_dependents_warning: Package libunbound-light is depended upon by packages:
* print_dependents_warning: unbound-daemon
* print_dependents_warning: These might cease to work if package libunbound-light is removed.

* print_dependents_warning: Force removal of this package with --force-depends.
* print_dependents_warning: Force removal of this package and its dependents
* print_dependents_warning: with --force-removal-of-dependent-packages.
 
Rootcanary test definitely broken - have nudged developer to fix ;).

Got auto-response from developer to say ... on leave until 12th October ... :(
 
With unbound 1.12.0 due soon, I thought I'd highlight the changes for discussion.

There are many nice memory-leak fixes. :) The changes to the config terminology still maintain backwards compatibility - this matters. I don't want to take the discussion in this direction, but it's worth noting they didn't break everyone's (especially unbound_manager's) configurations overnight.

What I wanted to highlight was:
The default config for unbound_manager sets edns-buffer-size to 1472, but under the recommendation of DNS Flag Day (https://dnsflagday.net/2020/) the new upstream default for this setting is being reduced to 1232. I wonder if this is a bit too far, but I'd like to see some data on UDP vs TCP DNS performance.

It wasn't long ago that this defaulted to 4096 in unbound, and that had performance-boosting potential. (Particularly for DNSSEC - an important thing to use, but having DNSSEC enabled makes the messages bigger.) By default, or at least before this upcoming release, unbound tries 4096 and, if that fails, tries again at 1232 for IPv6 and 1472 for IPv4.
The balancing act is picking a number that avoids fragmentation while also minimising fallback to TCP.
The exact number to use for this setting has been debated, but there was an inkling it should go a little lower. If DNS can't send what it needs over UDP, it can fall back to TCP, which is a bit slower (in practice I only see that happen on scientific journal sites and the like).

An EDNS buffer size of 1232 bytes will avoid fragmentation on nearly all current networks, IPv6 or IPv4.
However, this accounts for the smaller buffer size needed for IPv6; if you exclusively use IPv4, the number is perhaps too conservative.
Is it throwing away performance for the sake of conforming? Some set it all the way down to 512 to guarantee fragmentation can't happen (all but forcing slower TCP).

See older notes for the setting:
edns-buffer-size: <number> Number of bytes size to advertise as the EDNS reassembly buffer size. This is the value put into datagrams over UDP towards peers. The actual buffer size is determined by msg-buffer-size (both for TCP and UDP). Do not set higher than that value. Default is 4096 which is RFC recommended. If you have fragmentation reassembly problems, usually seen as timeouts, then a value of 1480 can fix it. Setting to 512 bypasses even the most stringent path MTU problems, but is seen as extreme, since the amount of TCP fallback generated is excessive (probably also for this resolver, consider tuning the outgoing tcp number).
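In 'unbound.conf' terms, the option under discussion is a one-line server-clause setting; for example, to adopt the new upstream default (the value shown is illustrative - pick per the discussion above):

```
server:
    edns-buffer-size: 1232
```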

In short, I'll be interested to see whether you think unbound_manager should stick with 1472, but changes like this (on servers in general) are likely to mean a shift, however marginal, towards DNS resolvers switching to TCP more often. I will continue to push my luck with 4096 but will probably relent to 1472 in the end on my IPv4-only setup. Perhaps the script could set 1232 if do-ip6: yes?
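For anyone wondering where 1472 and 1232 come from, they fall out of simple header arithmetic: a 1500-byte Ethernet MTU for IPv4, and the 1280-byte minimum MTU for IPv6. A quick sketch (the helper name is mine, purely illustrative):

```python
# Standard header sizes in bytes (IPv4/IPv6 + UDP).
IPV4_HEADER = 20
IPV6_HEADER = 40
UDP_HEADER = 8

def max_edns_payload(mtu: int, ipv6: bool) -> int:
    """Largest DNS message that fits in one unfragmented UDP datagram."""
    return mtu - (IPV6_HEADER if ipv6 else IPV4_HEADER) - UDP_HEADER

print(max_edns_payload(1500, ipv6=False))  # → 1472 (Ethernet MTU, IPv4)
print(max_edns_payload(1280, ipv6=True))   # → 1232 (IPv6 minimum MTU)
```

So 1232 is simply the "always safe for IPv6" number, while 1472 assumes an unfragmented 1500-byte path, which is exactly why it feels conservative on an IPv4-only network.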
 
Perhaps the script could set 1232 if do-ip6: yes?
I don't have IPv6 available from my ISP and apparently neither does @Linux_Chemist? , but I have posted beta 'unbound_manager v3.21b' on the Github 'dev' branch

Advanced IPv6 tinkerers can test the beta script using
Code:
e  = Exit Script [?]

A:Option ==> uf dev

    unbound_manager.sh downloaded successfully Github 'dev/development' branch

unbound Manager UPDATE Complete! 843c8cb62ae6a4a1a2b7591d9c45efec

If you have a custom 'unbound.conf v1.10' I suggest you first make a backup
Code:
e  = Exit Script [?]

A:Option ==> vb

Active 'unbound.conf' backed up to '/opt/share/unbound/configs/yyyymmdd-hhmmss_unbound.conf'
then download 'unbound.conf v1.11' from the Github 'dev' branch (which contains @Linux_Chemist's proposed default IPv6 'edns-buffer-size: 1232') and force the script to apply the edns-buffer-size change if IPv6 is enabled.
Code:
e  = Exit Script [?]

A:Option ==> i dev

Retrieving Custom unbound configuration
    unbound.conf downloaded successfully Github 'dev/development' branch

<snip>
You can check whether the IPv4 value has been overridden using
Code:
e  = Exit Script [?]

A:Option ==> oq edns-buffer-size

unbound-control 'edns-buffer-size' '1232'


EDIT: For non-IPv6 environments, I forgot to state that beta 'unbound.conf v1.11' also includes the following changes
Code:
# v1.11 Martineau - Add     'private-address: ::/0' Enhance 'do-ip6: no' i.e. explicitly drop ALL IPv6 responses
#                 - Add     IPv6 Private Address 'fd00::/8' and 'fe80::/10'
 
these numbers (the 1472, particularly) share a possibly striking similarity to the ~1500 MTU (and MRU) numbers that are germane if you're a user of (Cake-)QoS... and 4096 strikes me as the "larger" frame size on the way up to jumbo frames (9000 bytes?). If I'm anywhere near the right track with this, I would tend to think that if unbound is caching but recursively queries authoritative servers, wouldn't it be best to align with whatever THEY use on our networks? (I like these types of math-logic problems)
What happens if I choose "larger frames" for my LAN and my ISP wants to see 1492 bytes per packet on the WAN connection... does the router break things down before they go out?
I'm native IPv6 with my ISP. If you want some help testing things, DM me... I'm not sure I understand all of this, but I'll dig in and do my best
 
Until the new version of unbound is available, do you recommend rebooting the 86U regularly to clean up the memory-leak issues? I'm thinking every other day is often enough. Comments?

Also, has the 86u reboot issue been solved yet, where sometimes a reboot causes the router to just shut off, and requires manual intervention to restart?
 
I don't have IPv6 available from my ISP and apparently neither does @Linux_Chemist? , but I have posted beta 'unbound_manager v3.21b' on the Github 'dev' branch
True that. TalkTalk UK is my ISP and they've just put it off for years and years, responding to the question with "we're doing everything we can to make our IPv4 systems good". ;) End result: no IPv6 to speak of unless you feel like setting up IPv6-in-IPv4 tunnelling (I don't lol)

I meant to say - sorry it wasn't terribly clear - I don't mind what you set the config to for this setting; I just noticed this is what unbound now uses by default and wondered if you'd like to follow suit. Personally, I vote for whatever gives the best performance :p Unbound slapping a lower number in as the default just seems a bit heavy-handed a change when IPv4 was perfectly happy with 1472.

They do share a similarity - I think it's something like "you can accept 4096 since it's UDP, but if you're falling back to TCP you're confined to a sensible MTU value like Ethernet's 1500 minus the usual header overhead". If the receiving end can't handle the size you've chosen, it fails, the resend creates the timeout, and then it tries again with a lower value (and TCP's handshake guarantees it got the message).
Ultimately it's all about picking a standard, and it seems that standard is now 1472 for IPv4 and 1232 for IPv6, with the lower number for a mixed network. Will performance suffer? Potentially, but it has to be consensus-driven and the DNS overlords have spoken lol. I will abide. Long may they reign.

EDIT: Using TCP instead of UDP can cut throughput for DNS requests to 1/3 (https://dnsprivacy.org/wiki/pages/viewpage.action?pageId=17629326). TCP gives better congestion control, but does the small number of DNS requests on a home network really make a difference to congestion? :p
There's a graph showing that using TCP with PowerDNS (dnsdist) rather than unbound actually improved performance vs UDP - I'll definitely be looking into what's going on there later.


Is your unbound instance actually suffering from memory leaks? I've seen nothing like that on my end. You shouldn't need to reboot that often (if at all). If you're having trouble, you can check your RAM usage with the free command.
The issues patched in the upcoming version are in all likelihood very minor leaks that may never actually trigger in practice. Unless there's a demonstrable problem, it definitely doesn't merit daily restarting lol. You can restart unbound itself with unbound_manager advanced followed by the r command.
 

After my addition to that part of our discussion here, I went into my Network settings on my *ubuntu desktop and changed MTU in the Ethernet tab from Auto to 4096, and selected the previously unselected Multicast. I did some surfing, left it alone for a few hours after that, and now I've checked my connmon page in the GUI for packet loss. None, nada, zilch, zero. But now it seems much more...aggressive, or willing to move or something, somehow, on page loads - even on the one website I go to that has a woefully slow server. I DO have "prefer IPv6" selected, so that might have something to do with it.
Congestion control - is that even an issue on a SOHO network running unbound? I think not.
 
I'm beginning to think I should put these wordier thoughts around unbound/dns into separate posts :D

I think you and I are quite kindred in our love of tweaking, you know ;) I wouldn't mess with MTU though - the lowest MTU along the path (you to your router vs your router to your ISP, etc.) ends up being the bottleneck, and if a packet is too big it'll cause fragmentation and a flood of ICMP type 3 packets in response saying "woahhh boy, get that MTU down!"
Jumbo frames purely on the LAN side are cool though - and can lead to some crazy transfer speeds. The Internet's not ready for 9 KB packet sizes lol

The gist I've gotten from reading seems to indicate that you can curb a lot of the performance degradation from having to do dns over tcp vs udp, by having the program utilise cpu threads well - unfortunately that's not going to be much use on this kind of hardware but on a custom setup..? :) Maybe a little ryzen with at least 8 threads? Powerdns may be better at working asynchronously than Unbound, or it does its lookups in some different funky way, but it's very interesting that that testing I linked got better throughput on it for tcp than udp somehow. If that wasn't just an anomaly, there might be some crucial takeaway from it for Unbound if we're all migrating into a more 'dns over tcp' future.

I'm gonna do a little testing and use unbound_manager's stats this weekend.
Router looks like: with 4096 and 16 days' uptime, my queries = 41693 (39493 cache hits, 2200 misses ≈ 95% hit rate. Nice eh? :cool:). My total.tcpusage is at 0.
If I play around with the edns setting a bit, is there a variable in the stats I could use to indicate if it's too high and having to fallback to 1472 or use tcp (slow to resolve)? total.recursion.time.avg ? Or can I just check the times to domains on drill directly?
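As a side note, that hit rate can be computed straight from the counters unbound_manager reads; a quick sketch, assuming the usual unbound-control stats counter names (the helper function is mine):

```python
def cache_hit_ratio(stats_text: str) -> float:
    """Percentage of queries answered from cache, given 'unbound-control stats' output."""
    stats = dict(line.split("=", 1) for line in stats_text.strip().splitlines())
    return 100.0 * int(stats["total.num.cachehits"]) / int(stats["total.num.queries"])

# Figures from the post above:
sample = "total.num.queries=41693\ntotal.num.cachehits=39493\ntotal.num.cachemiss=2200"
print(round(cache_hit_ratio(sample), 1))  # → 94.7
```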

I think I've spent a bit too long reading about this already - I've had enough lol - but I'll leave some sources that led to these upstream changes for those interested. I'm going to settle on 1432 for a bit and see how it goes, but 1472 appears harmless enough (because I've found 4096 harmless enough). It doesn't really matter until you hit a website that won't load and needs it lowered, but I've yet to find one!

The new standard (1232 for all?)
(https://tools.ietf.org/id/draft-madi-dnsop-udp4dns-00.xml)

Big discussion on all the different values different resolvers etc are using

Code:
"512"  => "512: IPv4 Minimum",
"1220" => "1220: NSD IPv6 EDNS Minimum",
"1232" => "1232: IPv6 Minimum",
"1432" => "1432: 1500 Byte MTU",
"1480" => "1480: NSD IPv4 EDNS Minimum",
"4096" => "4096: (Old) Unbound Default"
from here
 
I can update to 3.21b successfully, but once I try to install 'unbound.conf v1.11' using
A:Option ==> i dev

Then I get
Code:
Retrieving Custom unbound configuration
        unbound.conf downloaded successfully Github 'dev/development' branch
        doc/example.conf.in downloaded successfully
Checking IPv6.....
Customising unbound IPv6 configuration.....
Customising unbound configuration Options:

Do you want to ENABLE unbound logging? (NO recommended)

        Reply 'y' or press ENTER  to skip
y
unbound Logging enabled - 'verbosity: 1'
/opt/var/lib/unbound/unbound.conf:153: error: cannot open include file '/opt/var/lib/unbound/adblock/adservers': No such file or directory
/opt/var/lib/unbound/unbound.conf:212: error: cannot open include file '/opt/share/unbound/configs/unbound.conf.addgui': No such file or directory
/opt/var/lib/unbound/unbound.conf:215: error: cannot open include file '/opt/share/unbound/configs/unbound.conf.localhosts': No such file or directory
/opt/var/lib/unbound/unbound.conf:221: error: cannot open include file '/opt/share/unbound/configs/unbound.conf.add': No such file or directory
read /opt/var/lib/unbound/unbound.conf failed: 4 errors in configuration file
Restarting dnsmasq.....
Done.

***ERROR FATAL...ABORTing!
Edit: after I rem'd (commented out) the items above and tried to start unbound, it was still failing - from the debug output it looks like this unbound.conf is also set to disable dnsmasq. After comparing/editing the v1.10 backup against v1.11 I was able to fix it, and it's now working okay.
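That "rem the items" step can be automated; here's a hedged sketch (the helper name and approach are mine, not part of unbound_manager) that comments out any 'include:' line whose target file doesn't exist, so unbound-checkconf can pass:

```python
import os

def disable_missing_includes(conf_text: str) -> str:
    """Comment out 'include:' lines that point at files which don't exist."""
    out = []
    for line in conf_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("include:"):
            path = stripped.split(":", 1)[1].strip().strip('"')
            if not os.path.exists(path):
                line = "# " + line  # preserve the line so it can be re-enabled later
        out.append(line)
    return "\n".join(out)
```

You'd feed it the contents of '/opt/var/lib/unbound/unbound.conf' and write the result back - after taking a backup with the 'vb' option first, of course.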
 
Apologies :eek:

Thanks @Archiel, I removed the beta entries such as 'ipset' etc. and Adblock should not be enabled by default.
 
