
After installing AdGuardHome, I noticed a few things that didn't seem right; here are my notes

lopperman

New Around Here
Disclaimer: I'm by no means a security expert, and I acknowledge that some of the issues I found after installing AdGuardHome could be caused by my own configuration mistakes, or might have nothing to do with AdGuardHome.

I installed AdGuardHome today (version 1.9.3, via amtm). I'd been monitoring the output, making adjustments to DNS blocklists, and generally looking around in the AdGuardHome settings. Something caught my eye when I was reviewing Settings --> Client Settings: some non-local IP addresses were showing up. Specifically: 167.94.138.178 and 66.132.153.155. Both of those resolve to census-scanner.com. What was confusing wasn't that they were trying to 'knock on my door', but that they were actually reaching the locally running DNS server installed with AdGuard.

Apparently, when AdGuardHome was installed, it was available from the WAN and allowed incoming UDP/TCP traffic on port 53. I don't believe that should ever be allowed by default, and at least for my home network, I never need my local DNS service to be available from the WAN. I fixed this issue by blocking incoming UDP/TCP port 53 for WAN traffic. It appears that AdGuard Home ships with 0.0.0.0 as the default bind address for setup. It's possible there's an assumption that the default firewall would block unsolicited WAN traffic to port 53, but in my case (I have a public IP that gets assigned to my WAN port from my Spectrum cable modem), it was wide open. There's no legitimate reason I could find to expose your local DNS service to the WAN. It should only be needed for your home devices, or for devices connected via VPN (if configured).
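One quick way to see whether a service is bound to the wildcard address is to check the listening sockets on the router itself. This is only a sketch: it assumes a netstat-style tool is available (BusyBox netstat on most ASUS routers), and the helper name is my own:

```shell
#!/bin/sh
# Hypothetical helper: reads "netstat -tuln" style output on stdin and
# succeeds if any listener on the given port is bound to 0.0.0.0, i.e.
# listening on ALL interfaces, which includes the WAN.
listens_on_wildcard() {
  grep -q "0\.0\.0\.0:$1 "
}

# Typical use on the router:
#   netstat -tuln | listens_on_wildcard 53 && echo "DNS is bound to ALL interfaces"
```

If this reports a wildcard bind, the service is reachable on every interface and you are relying entirely on the firewall to keep WAN traffic out.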

In my case, adding explicit drop rules for port 53 on eth0 (the WAN interface) solved the problem. (I also added them to /jffs/scripts/firewall-start to persist across reboots.)

After I did that, I decided to dig a bit deeper and found that the AdGuard Home web UI (which I'm running on the default port 3000) was also exposed to the WAN. I blocked that by adding a drop rule for TCP/UDP port 3000 from the WAN.

Probably unrelated to AdGuard Home, I also found that port 21 (FTP), port 9999 (infosvr, a known ASUS vulnerability), and port 202 (which I'm running for local SSH only) were open to the WAN, so I added drop rules for those as well.

Wondering if anyone else has had similar experiences.
 
FYI, here are the changes I made to /jffs/scripts/firewall-start:
Code:
#!/bin/sh
# /jffs/scripts/firewall-start
# Block external access to services from WAN (eth0)

# DNS (AdGuard Home)
iptables -I INPUT -i eth0 -p udp --dport 53 -j DROP
iptables -I INPUT -i eth0 -p tcp --dport 53 -j DROP

# AdGuard Home web UI
iptables -I INPUT -i eth0 -p tcp --dport 3000 -j DROP

# FTP
iptables -I INPUT -i eth0 -p tcp --dport 21 -j DROP

# infosvr (ASUS discovery - known vulnerability)
iptables -I INPUT -i eth0 -p udp --dport 9999 -j DROP

# SSH
iptables -I INPUT -i eth0 -p tcp --dport 202 -j DROP

# cfg_server (AiMesh)
iptables -I INPUT -i eth0 -p tcp --dport 7788 -j DROP
iptables -I INPUT -i eth0 -p udp --dport 7788 -j DROP

# wsdd2 (Windows discovery)
iptables -I INPUT -i eth0 -p tcp --dport 3702 -j DROP
iptables -I INPUT -i eth0 -p udp --dport 3702 -j DROP

# nmbd (NetBIOS)
iptables -I INPUT -i eth0 -p udp --dport 137 -j DROP
iptables -I INPUT -i eth0 -p udp --dport 138 -j DROP

# avahi (mDNS/Bonjour)
iptables -I INPUT -i eth0 -p udp --dport 5353 -j DROP
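One caveat with plain -I rules: firewall-start can run more than once (e.g., whenever the router's firewall restarts), so the rules above will stack duplicates over time. A possible refinement, assuming your iptables build supports -C (check rule, present on recent firmware), is to insert only when the rule is missing; the block_wan helper name is my own:

```shell
#!/bin/sh
# Insert a WAN-side DROP rule only if it isn't already present, so repeated
# runs of firewall-start don't accumulate duplicate rules.
block_wan() {
  proto="$1"; port="$2"
  # -C returns success if an identical rule already exists in the chain
  iptables -C INPUT -i eth0 -p "$proto" --dport "$port" -j DROP 2>/dev/null ||
    iptables -I INPUT -i eth0 -p "$proto" --dport "$port" -j DROP
}

# Example (run as root at firewall-start time):
#   block_wan udp 53
#   block_wan tcp 53
#   block_wan tcp 3000
```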
 
There is a difference between listening on 0.0.0.0 or public IP, and actually being accessible on those IPs and ports. The default firewall would not allow such incoming connections from outside. Do you see any rules that are allowing this incoming traffic? Dropping should be the default.
 
My WAN-facing router (RT-BE92U) was not blocking the default port used by AdGuard (WAN TCP/UDP 3000)
You would have needed to disable the firewall or set up a custom rule to allow the traffic. How do you know it wasn't blocking? Did you test it from outside your network? Testing from your LAN isn't a valid test, even if using your WAN IP. This sounds pretty bad if it were true, but what's the evidence?
What does the INPUT chain look like now?
Code:
iptables -v -S INPUT
iptables -v -S logdrop
 
I'm having the same bug as well. I only realized a few days ago that I had thousands of requests in AdGuard from IPs that do not belong to my LAN.
 
I don't have the AdGuard log anymore because I tried uninstalling and reinstalling it to see whether the ports were opened by it or not.
I can confirm that after uninstalling AGH, ports 53 and 30000 (the web UI port) were correctly closed.

This is the result of the first command:
Code:
iptables -v -S INPUT
-P INPUT ACCEPT -c 292352 154143521
-A INPUT -i ppp0 -p tcp -m tcp --dport 1000 -c 3 156 -j DROP
-A INPUT -i ppp0 -p udp -m udp --dport 9999 -c 0 0 -j DROP
-A INPUT -i ppp0 -p tcp -m tcp --dport 21 -c 3 136 -j DROP
-A INPUT -i ppp0 -p tcp -m tcp --dport 30000 -c 3 180 -j DROP
-A INPUT -i ppp0 -p tcp -m tcp --dport 53 -c 4 224 -j DROP
-A INPUT -i ppp0 -p udp -m udp --dport 53 -c 6 396 -j DROP
-A INPUT -i tun11 -c 0 0 -j REJECT --reject-with icmp-port-unreachable
-A INPUT -i ppp0 -p tcp -m tcp --dport 1000 -c 0 0 -j DROP
-A INPUT -i ppp0 -p udp -m udp --dport 9999 -c 0 0 -j DROP
-A INPUT -i ppp0 -p tcp -m tcp --dport 21 -c 0 0 -j DROP
-A INPUT -i ppp0 -p tcp -m tcp --dport 30000 -c 0 0 -j DROP
-A INPUT -i ppp0 -p tcp -m tcp --dport 53 -c 0 0 -j DROP
-A INPUT -i ppp0 -p udp -m udp --dport 53 -c 0 0 -j DROP
-A INPUT -i tun11 -c 3 462 -j REJECT --reject-with icmp-port-unreachable

This one instead the result of the second command:
Code:
iptables -v -S logdrop
-N logdrop
-A logdrop -m state --state NEW -c 0 0 -j LOG --log-prefix "DROP " --log-tcp-sequence --log-tcp-options --log-ip-options
-A logdrop -c 0 0 -j DROP

I've already applied the drop rules suggested by @lopperman
 
Sorry for my late response...
After checking the settings pages, nothing looked unusual, so since I was already spending time on the router, I decided to update the firmware to 3006.102.6.
Immediately after the first boot following the update, and even after a reboot, I started having some weird problems, like random errors when resolving some domains and connection drops on my VPN (the router is the client), so I decided to do a factory reset.
Now everything seems to be working fine.

Thanks a lot for the help!!!
 
You would have needed to disable the firewall or set up a custom rule to allow the traffic. How do you know it wasn't blocking? Did you test it from outside your network? Testing from your LAN isn't a valid test, even if using your WAN IP. This sounds pretty bad if it were true, but what's the evidence?
What does the INPUT chain look like now?
Code:
iptables -v -S INPUT
iptables -v -S logdrop
I knew it wasn't blocking when I started seeing external IP addresses show up in my AdGuard Home client list.

Because you asked:

iptables -v -S INPUT
Bash:
iptables -v -S INPUT
-P INPUT ACCEPT -c 0 0
-A INPUT -i eth0 -p tcp -m tcp --dport 8443 -c 3 132 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 3443 -c 0 0 -j DROP
-A INPUT -i eth0 -p udp -m udp --dport 853 -c 0 0 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 853 -c 0 0 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 445 -c 0 0 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 139 -c 0 0 -j DROP
-A INPUT -i eth0 -p udp -m udp --dport 5353 -c 0 0 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 443 -c 3 132 -j DROP
-A INPUT -i eth0 -p udp -m udp --dport 138 -c 0 0 -j DROP
-A INPUT -i eth0 -p udp -m udp --dport 137 -c 0 0 -j DROP
-A INPUT -i eth0 -p udp -m udp --dport 3702 -c 0 0 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 3702 -c 0 0 -j DROP
-A INPUT -i eth0 -p udp -m udp --dport 7788 -c 0 0 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 7788 -c 0 0 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 202 -c 0 0 -j DROP
-A INPUT -i eth0 -p udp -m udp --dport 9999 -c 0 0 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 21 -c 0 0 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 3000 -c 0 0 -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 53 -c 0 0 -j DROP
-A INPUT -i eth0 -p udp -m udp --dport 53 -c 0 0 -j DROP
-A INPUT -p udp -m udp --dport 53 -c 60 3892 -j URLFI
-A INPUT -p icmp -m icmp --icmp-type 8 -c 105 131640 -j INPUT_PING
-A INPUT -m state --state RELATED,ESTABLISHED -c 30077 5868841 -j ACCEPT
-A INPUT -m state --state INVALID -c 18 806 -j logdrop
-A INPUT ! -i br0 -c 13087 2772372 -j PTCSRVWAN
-A INPUT -i br0 -c 3484 542900 -j PTCSRVLAN
-A INPUT ! -i lo -p tcp -m tcp --dport 5152 -c 0 0 -j logdrop
-A INPUT -i br0 -m state --state NEW -c 3484 542900 -j ACCEPT
-A INPUT -i lo -m state --state NEW -c 12889 2760873 -j ACCEPT
-A INPUT -p udp -m udp --sport 67 --dport 68 -c 0 0 -j ACCEPT
-A INPUT -p icmp -c 0 0 -j INPUT_ICMP
-A INPUT -c 198 11499 -j WGSI
-A INPUT -c 198 11499 -j WGCI
-A INPUT -c 198 11499 -j OVPNSI
-A INPUT -c 198 11499 -j OVPNCI
-A INPUT -c 198 11499 -j SDN_FI
-A INPUT -c 198 11499 -j logdrop

iptables -v -S logdrop
Bash:
-N logdrop
-A logdrop -m state --state NEW -c 533 21995 -j LOG --log-prefix "DROP " --log-tcp-sequence --log-tcp-options --log-ip-options
-A logdrop -c 562 23241 -j DROP
 
For anyone still watching this thread, just wanted to provide a high-level summary of what I saw and what I did to correct things. It's possible that I had unknowingly done something with a script that caused the bad behavior when I installed AdGuard, but I honestly don't think it was 'me'.

ISSUE SUMMARY
* Problem:
External IPs from internet scanners were appearing in my AdGuard Home DNS query logs. These were random external IP addresses making DNS queries to my router.
* Root Cause: AdGuard Home was binding to 0.0.0.0 (all interfaces), which included the WAN interface. This meant anyone on the internet could reach my DNS service on port 53.
* Corrections Made
AdGuard Home Configuration (from /opt/etc/AdGuardHome/AdGuardHome.yaml)

Note: "10.10.3.1" is my primary router's LAN IP, so make the requisite changes for your own network.

YAML:
  # Changed from:
  http:
    address: 0.0.0.0:3000

  dns:
    bind_hosts:
      - 0.0.0.0

  # Changed to:
  http:
    address: 10.10.3.1:3000

  dns:
    bind_hosts:
      - 10.10.3.1
      - 127.0.0.1
The change above restricts AdGuard to only listen on the internal LAN interface (10.10.3.1 in my case) and localhost, preventing any external access.
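For anyone wanting to double-check their own config, here's a rough way to flag a wildcard bind in AdGuardHome.yaml. This is a grep-based sketch, not a real YAML parser, and the function name is my own:

```shell
#!/bin/sh
# Reads AdGuardHome.yaml on stdin and succeeds if either the web UI address
# or a dns bind_hosts entry uses the 0.0.0.0 wildcard.
yaml_has_wildcard_bind() {
  grep -Eq '^[[:space:]]*(address:[[:space:]]*0\.0\.0\.0|- 0\.0\.0\.0)'
}

# Typical use on the router:
#   yaml_has_wildcard_bind < /opt/etc/AdGuardHome/AdGuardHome.yaml \
#     && echo "wildcard bind found - service listens on ALL interfaces"
```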

Key Takeaway: Always bind network services to specific internal IPs rather than 0.0.0.0 to prevent unintended external exposure, even when you believe your firewall should block it.

Why External IPs Got Through:
1. 0.0.0.0 means ALL interfaces - When AdGuard bound to 0.0.0.0:53, it was listening on:
- 10.10.3.1 (LAN, intended)
- 127.0.0.1 (localhost, intended)
- My WAN IP (not intended)
2. Firewall rules typically allow inbound port 53 -- Most router firewalls permit DNS traffic because the router itself needs to receive DNS responses from upstream servers. The firewall saw DNS packets arriving on port 53 and allowed them through as expected traffic.
3. The service was legitimately listening. From the firewall's perspective, this wasn't unsolicited traffic being blocked -- it was traffic destined for an open, listening port on the router itself.

By changing AdGuard to bind only to 10.10.3.1 and 127.0.0.1, the service simply doesn't listen on the WAN interface at all. External packets to port 53 on my WAN IP now have nothing to connect to, regardless of firewall rules.

Firewall rules are the second line of defense. The first line is ensuring services only bind to the interfaces they need.

EDIT: I did find this AdGuard issue that was opened/requested, and the request described the exact behavior that caused my problems (binding to 0.0.0.0:53).
 
@lopperman Your firewall changes are unnecessary and may negatively impact the functionality of the router.

1. As can be seen from your iptables output there are no explicit rules*** that would allow incoming traffic to port 53 or the other ports you're blocking. The final rule drops all traffic not explicitly allowed.

2. The DNS server needs to bind port 53 to 0.0.0.0 to allow DNS requests from dynamic interfaces, not just br0. Without this requests will fail from interfaces associated with VPNs, guest networks, SDNs, etc.

2. Firewall rules typically allow inbound port 53 -- Most router firewalls permit DNS traffic because the router itself needs to receive DNS responses from upstream servers. The firewall saw DNS packets arriving on port 53 and allowed them through as expected traffic.
This is incorrect. The router (or any other device) does not need to open port 53 to inbound traffic to receive DNS responses from upstream servers. The DNS requests are made from a random ephemeral port to port 53 on the upstream server.
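In iptables terms, those replies are admitted by the stateful connection-tracking rule already present in the default INPUT chain (it appears in the dump earlier in this thread), never by an open port 53. Conceptually, the relevant firewall fragment is just:

```shell
# Reply packets to the router's own outbound DNS queries (sent from a random
# ephemeral port to 53 on the upstream server) match the tracked connection,
# so no inbound port-53 rule is ever needed:
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
```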

Speculation on cause of your issue:

The only way external requests could make it through the firewall to your DNS server would be a) if there was an explicit rule added, or b) the firewall was disabled in the router settings (as was the case with @madaxx).

As there is currently nothing wrong with your firewall rules (when your modifications are ignored) it's difficult to guess what happened before. Two things come to mind as possibilities, a) malware (@madaxx latterly reported having "weird" DNS problems), or b) the AdGuardHome installation process modifies the firewall rules.

*** Check also the contents of the following iptables chains for incorrect rules:
Code:
iptables -v -S URLFI
iptables -v -S PTCSRVWAN
 
Checked my .yaml file on a perfectly working AdGuard instance and it binds only to my router's local IP; no mention of 0.0.0.0 or even localhost.
 
Thank you for the detailed response and corrections.

You're right about the DNS response mechanism - I was imprecise. Responses from upstream DNS use the RELATED,ESTABLISHED rule, not an inbound port 53 rule.

I've checked the chains you mentioned:
- URLFI - empty
- PTCSRVWAN - empty
- PTCSRVLAN - empty
- No port forwarding rules
- UPnP chains empty

My current INPUT chain now has explicit DROP rules for port 53 (TCP/UDP) on eth0, plus the final logdrop catch-all. I added these after discovering the issue.

Regarding what originally allowed the traffic - I have a few theories:

1. AdGuard was binding to 0.0.0.0 - This meant it was listening on ALL interfaces including eth0 (WAN). If any packet reached the router's port 53, AdGuard would respond.
I've since changed it to bind only to 10.10.3.1 and 127.0.0.1.
2. Default Merlin firewall may not explicitly block port 53 - Before my custom firewall-start rules, I don't believe there was a specific DROP rule for port 53 on WAN. The default ruleset relies on the final logdrop, but I'm not 100% certain there wasn't a gap somewhere.

I did not have the firewall disabled, and I doubt malware (though I can't rule it out completely). The AdGuard installation via amtm doesn't modify iptables directly.

What remains unclear is exactly how external packets reached the router's port 53 in the first place. The VSERVER chain (which handles WAN-to-router NAT) only had VUPNP (empty), so there was no explicit port forward.
 
If any packet reached the router's port 53, AdGuard would respond.
It would be very unlikely that the USB drive with AGH would be mounted and started before the default firewall is applied.

What about the possibility of a logging bug in AGH? Maybe the data you saw in AGH wasn’t real. What volume of external requests are we talking about?
 
