geekweaver
Possible missing `idx` check in firewall_sdn.c causing MASQUERADE on main LAN (br0) — breaks bridged VMs and containers
Router Model: ASUS RT-BE82U
Firmware Version: 3.0.0.6.102_39254 (39254-gc155c5b_1531-g93455_BB0B)
Affected Branch: Potentially all ASUSWRT 5.0 / 3006.x branch models using the SDN firewall subsystem
Issue
I also posted this over on the ZenTalk forums, but I'd like to post it here too. This is a new router replacing an older Orbi mesh, and all of this transpired during the initial setup and cutover from the old router. While working through this, prior to finding this rule, I could swap back to my old Orbi and everything worked perfectly.
I've been troubleshooting an issue where inbound TCP connectivity to Proxmox VMs and Docker containers breaks after any firewall restart on my RT-BE82U. After digging into the iptables NAT table I found this rule being automatically generated:
Bash:
-A POSTROUTING -s 192.168.19.0/24 -d 192.168.19.0/24 -o br0 -j MASQUERADE
This appears to apply hairpin NAT to all traffic where source and destination are within the primary LAN subnet on br0. For users running Proxmox VMs, Docker containers, or any bridged virtual network interfaces, this breaks inbound TCP connectivity — destination hosts see the router's IP (192.168.19.1) as the source rather than the actual client, causing asymmetric routing and broken sessions.
Worth noting — ICMP (ping) continues to work fine, which makes this difficult to diagnose. Only stateful TCP connections fail, so it's easy to assume the network is healthy until you try to actually connect to something.
Trigger Conditions
The rule is re-injected not just on reboot but on any event that triggers a firewall restart, including:
- Router boot
- WAN connect/reconnect
- Any configuration save in the web UI that triggers firewall reload
- AiProtection enable/disable
What I Found in the Source
Looking at release/src/router/rc/firewall_sdn.c, I noticed that the DoT block correctly uses an idx check to skip the main bridge (br0, idx=0):
C:
#ifdef RTCONFIG_DNSPRIVACY
if (pmtl->nw_t.idx) //ignore br0
{
// DoT DNS redirect logic
fprintf(fp, "-A PREROUTING ...");
}
#endif
The NAT loopback block immediately following doesn't appear to have the same check.
Location:
release/src/router/rc/firewall_sdn.c, approximately line 509 in the current main branch. Line numbers may vary by version — search for the comment // NAT lookback to locate the exact line.
C:
ip2class((char *)(pmtl->nw_t.addr), (char *)(pmtl->nw_t.netmask), sdn_subnet_class);
// NAT lookback <-- possibly missing idx check here?
fprintf(fp, "-A POSTROUTING -s %s -d %s -o %s -j MASQUERADE\n",
sdn_subnet_class, sdn_subnet_class, pmtl->nw_t.ifname);
When idx=0 (the main br0 interface), this generates a MASQUERADE rule for LAN-to-LAN traffic which doesn't seem intentional — though I may be missing context about why this was done this way.
Possible Fix
If this is indeed an oversight, adding the same
idx check used by the DoT block above might resolve it:
C:
ip2class((char *)(pmtl->nw_t.addr), (char *)(pmtl->nw_t.netmask), sdn_subnet_class);
// NAT lookback
if (pmtl->nw_t.idx) //ignore br0
fprintf(fp, "-A POSTROUTING -s %s -d %s -o %s -j MASQUERADE\n",
sdn_subnet_class, sdn_subnet_class, pmtl->nw_t.ifname);
This would ensure NAT loopback only applies to guest/IoT/VPN SDN profiles (idx > 0) and leaves the primary bridge untouched.
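To illustrate the intended effect, here's a small shell sketch that simulates the per-profile rule generation with the proposed guard. The profile table is hypothetical example data (interface names and subnets are made up, except br0 which matches my setup), not output from the actual firmware:

```shell
# Hypothetical SDN profile table: "idx ifname subnet"; idx 0 is the main LAN bridge.
profiles="0 br0 192.168.19.0/24
1 br52 192.168.52.0/24
2 br53 192.168.53.0/24"

rules=$(echo "$profiles" | while read -r idx ifname subnet; do
  # Mirror the proposed `if (pmtl->nw_t.idx)` guard: skip the main bridge (idx 0)
  [ "$idx" -eq 0 ] && continue
  echo "-A POSTROUTING -s $subnet -d $subnet -o $ifname -j MASQUERADE"
done)
echo "$rules"
```

With the guard in place, only the idx > 0 profiles (guest/IoT/VPN SDNs) get a hairpin MASQUERADE rule, and br0 is left alone.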
Verification
Manually deleting the rule immediately restores full VM/container connectivity:
Bash:
iptables -t nat -D POSTROUTING -s 192.168.19.0/24 -d 192.168.19.0/24 -o br0 -j MASQUERADE
The rule returns on any firewall restart, confirming it is being procedurally generated rather than being a one-time misconfiguration.
Before fix — iptables POSTROUTING:
Code:
Chain POSTROUTING (policy ACCEPT 33 packets, 4922 bytes)
num pkts bytes target prot opt in out source destination
1 691 53710 MASQUERADE all -- * vlan201 ![public IP] 0.0.0.0/0
2 1598 148K MASQUERADE all -- * br0 192.168.19.0/24 192.168.19.0/24
Before fix — netstat on client (192.168.19.122 connecting to VM at 192.168.19.6):
Code:
TCP 192.168.19.122:49250 192.168.19.6:53 FIN_WAIT_1
TCP 192.168.19.122:49666 192.168.19.6:53 FIN_WAIT_1
TCP 192.168.19.122:49667 192.168.19.6:53 FIN_WAIT_1
TCP 192.168.19.122:49747 192.168.19.6:53 ESTABLISHED
TCP 192.168.19.122:49919 192.168.19.6:53 FIN_WAIT_1
TCP 192.168.19.122:50382 192.168.19.6:53 FIN_WAIT_1
TCP 192.168.19.122:50855 192.168.19.6:443 ESTABLISHED
Multiple TCP connections stuck in FIN_WAIT_1 — sessions cannot complete teardown. Wireshark shows a flood of TCP retransmissions.
After fix — iptables POSTROUTING:
Code:
Chain POSTROUTING (policy ACCEPT 43 packets, 4755 bytes)
num pkts bytes target prot opt in out source destination
1 950 82286 MASQUERADE all -- * vlan201 ![public IP] 0.0.0.0/0
After fix — netstat on client:
Code:
TCP 192.168.19.122:51783 192.168.19.6:53 TIME_WAIT
TCP 192.168.19.122:52022 192.168.19.6:8300 ESTABLISHED
TCP 192.168.19.122:52148 192.168.19.6:443 ESTABLISHED
TCP 192.168.19.122:52160 192.168.19.6:53 TIME_WAIT
TCP 192.168.19.122:52376 192.168.19.6:53 TIME_WAIT
Connections now completing normally — FIN_WAIT_1 replaced by clean TIME_WAIT states. Wireshark shows clean traffic with no retransmissions.
Hoping someone more familiar with the codebase can confirm whether this is intentional or an oversight.
Current Workaround
For anyone hitting this in the meantime — a scheduled task running every few minutes on an always-on Linux host on your network:
Bash:
ssh [email protected] 'iptables -t nat -D POSTROUTING -s 192.168.19.0/24 -d 192.168.19.0/24 -o br0 -j MASQUERADE 2>/dev/null'
This can be added to cron or run as a Home Assistant shell_command on a schedule.
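For reference, a minimal crontab entry for this might look like the following. This assumes key-based SSH auth to the router is already set up, and the interval, username, and addresses are from my setup and would need adjusting for yours:

```shell
# m h dom mon dow  command
# Every 5 minutes, delete the hairpin rule if present (errors suppressed when absent)
*/5 * * * * ssh [email protected] 'iptables -t nat -D POSTROUTING -s 192.168.19.0/24 -d 192.168.19.0/24 -o br0 -j MASQUERADE 2>/dev/null'
```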
Has Anyone Else Seen This?
I'm curious whether others have run into this, particularly those running:
- Proxmox or any hypervisor with bridged VMs
- Docker on a home server connected to an ASUS WiFi 7 router
- NAS devices running Docker containers (Synology, TrueNAS, Unraid)
- Any setup where devices have virtual MAC addresses behind a Linux bridge
The symptom is easy to miss — ping works fine but TCP connections to your VMs or containers fail or time out, and everything works perfectly if you switch to a different router. If you've experienced this and assumed it was a misconfiguration on your server side, it may actually be this rule.
Running iptables -t nat -L POSTROUTING -n -v on your router will show the rule if it's present.
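If you'd rather grep for it, something like this should flag the hairpin rule. The nat_rules variable below is sample data in the shape of iptables-save output (with my subnet); on the router itself you'd pipe the real command instead:

```shell
# Sample rules shaped like `iptables-save -t nat` output (illustrative only;
# on the router, use the real command's output instead of this variable):
nat_rules='-A POSTROUTING -o vlan201 -j MASQUERADE
-A POSTROUTING -s 192.168.19.0/24 -d 192.168.19.0/24 -o br0 -j MASQUERADE'

# Flag any rule that masquerades traffic out of the main bridge br0
echo "$nat_rules" | grep -- '-o br0 -j MASQUERADE'
```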