Tutorial [Scripts][2025-12-07] SSD TRIM, IPSet-based Firewall & Tunnel Director, IPv6 Enhancements, WG Port Forwarder, and more

Thanks for the update! Just tried out the new ssd_trim.sh script... sadly, it failed to see my Samsung SSD which is running inside a JMicron drive enclosure:

Code:
ViktorJp@GT-AX6000-3C88:/jffs/scripts/ssd# sh ssd_trim.sh
ssd_trim: Scanning /proc/mounts for USB-backed filesystems under /tmp/mnt
ssd_trim: Skipping drive=/dev/sda (marked non-SSD) for label=ASUS-SSD mount=/tmp/mnt/ASUS-SSD part=/dev/sda1 fs=ext4 vendor_id=152d vendor_name=JMicron
ssd_trim: No suitable USB-backed mountpoints detected under /tmp/mnt; nothing to trim. Supported filesystems: ext2, ext3, ext4

Is there still a way to force it to assume that this vendor_id is a legit SSD?
Interesting, maybe some enclosures incorrectly report that they are rotational drives. Can you try rerunning the script with this check removed?


If it works fine, I will remove this check completely. It’s not really required.
 
Thanks for the help! Yep, that worked fine:

Code:
ViktorJp@GT-AX6000-3C88:/jffs/scripts/ssd# sh ssd_trim.sh
ssd_trim: Scanning /proc/mounts for USB-backed filesystems under /tmp/mnt
ssd_trim: Detected SSD candidate: label=ASUS-SSD mount=/tmp/mnt/ASUS-SSD part=/dev/sda1 fs=ext4 vendor_id=152d vendor_name=JMicron
ssd_trim: Running fstrim on /tmp/mnt/ASUS-SSD...
ssd_trim: fstrim succeeded: /tmp/mnt/ASUS-SSD: 238916677632 bytes trimmed
ssd_trim: Successfully trimmed all selected SSD mountpoints

Changes made:

Code:
            # Build rich context (we'll reuse it for all checks)
            context="label=${label} mount=${m} part=${dev} fs=${fstype} vendor_id=${vid}"
            [ -n "$manufacturer" ] && context="${context} vendor_name=${manufacturer}"

            # 2) Rotational check (skip HDDs / "rotational" devices)
 #           if ! disk_passes_rotational_check "$disk_name" "$context"; then
 #               continue
 #           fi

            # 3) Filesystem support check (TRIM only on ext2/ext3/ext4)
            if ! filesystem_supports_trim "$dev" "$fstype" "$context"; then
                continue
            fi
 
Thanks for the feedback. I've merged the fix. The script now skips a disk immediately only when its filesystem is unsupported.
Happy to help. Quick question... Now that this check is removed, what happens if a rotational hard drive is being used? Will it fail elsewhere when trying to run fstrim on it?
 
It will try to trim the drive, fail, mark it in nvram as disabled (by generating a unique drive ID), and skip it during all subsequent runs.
Code:
kuchkovsky@rt:/tmp/home/root# /jffs/scripts/ssd/ssd_trim.sh
ssd_trim: Scanning /proc/mounts for USB-backed filesystems under /tmp/mnt
ssd_trim: Detected SSD candidate: label=st5 mount=/tmp/mnt/st5 part=/dev/sda1 fs=ext4 vendor_id=04e8 vendor_name=Samsung
ssd_trim: Running fstrim on /tmp/mnt/st5...
ssd_trim: fstrim succeeded: /tmp/mnt/st5: 1319407616 bytes trimmed
ssd_trim: Detected SSD candidate: label=sdb1 mount=/tmp/mnt/sdb1 part=/dev/sdb1 fs=ext4 vendor_id=1058 vendor_name=Western Digital
ssd_trim: Set 'unmap' (was 'full') -> /sys/block/sdb/device/scsi_disk/1:0:0:0/provisioning_mode
ssd_trim: Running fstrim on /tmp/mnt/sdb1...
ssd_trim: ERROR: fstrim failed on /tmp/mnt/sdb1: Remote I/O error
ssd_trim: WARNING: No usable write_same_max_bytes for /dev/sdb; disabling TRIM (nvram ssd_trim_1058_25a3_3256474458584150_discard_max_bytes=0)
kuchkovsky@rt:/tmp/home/root# /jffs/scripts/ssd/ssd_trim.sh
ssd_trim: Scanning /proc/mounts for USB-backed filesystems under /tmp/mnt
ssd_trim: Detected SSD candidate: label=st5 mount=/tmp/mnt/st5 part=/dev/sda1 fs=ext4 vendor_id=04e8 vendor_name=Samsung
ssd_trim: Running fstrim on /tmp/mnt/st5...
ssd_trim: fstrim succeeded: /tmp/mnt/st5: 0 bytes trimmed
ssd_trim: Skipping drive=/dev/sdb (disabled via nvram: ssd_trim_1058_25a3_3256474458584150_discard_max_bytes=0) for label=sdb1 mount=/tmp/mnt/sdb1 part=/dev/sdb1 fs=ext4 vendor_id=1058 vendor_name=Western Digital
ssd_trim: No SSD mountpoints required trimming

This behavior follows the script's new strategy: attempt to trim every eligible device, and permanently exclude any device that still fails even after retrying with a smaller discard_max_bytes value. You can also exclude specific devices in advance by listing their labels in config.sh.
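For illustration, the disable flag works roughly along these lines. This is a simplified sketch with illustrative variable names (disk_id, disk_name, mount_point), not the script's actual code:
Code:
# Inside the per-mountpoint loop. disk_id is assumed to be derived from the
# USB vendor/product IDs and serial, e.g. 1058_25a3_3256474458584150 above.
nvram_key="ssd_trim_${disk_id}_discard_max_bytes"

# Skip drives that were disabled on a previous run
if [ "$(nvram get "$nvram_key")" = "0" ]; then
    echo "ssd_trim: Skipping drive=/dev/${disk_name} (disabled via nvram: ${nvram_key}=0)"
    continue
fi

# Try the trim; if it fails and no smaller discard_max_bytes can help,
# disable the drive permanently so later runs skip it
if ! fstrim -v "$mount_point"; then
    nvram set "${nvram_key}=0"
    nvram commit
fi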
 
Excellent work!! ;)
 
@XIII could you please try the updated version of the SSD TRIM script with your drive? It should now automatically tune the discard_max_bytes value and remember it for subsequent runs.
I created my own super simplified version based on your previous version and no longer run yours...

(but will still look into your changes to see what I can learn this time)
 
It would be nice if you could run the script to confirm that it works correctly on the first try on your side. Theoretically, it should detect and apply the fix on the first run, and then reuse it on subsequent runs. This way, it trims the device properly without any additional configuration.
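Roughly, the first-run tuning works like this. This is a simplified sketch with illustrative variable names, and the actual script may differ in the details:
Code:
# Read the device's advertised WRITE SAME limit and use it to cap
# discard_max_bytes, then remember the value in nvram for later runs.
wsmb="$(cat "/sys/block/${disk_name}/queue/write_same_max_bytes" 2>/dev/null)"
if [ -n "$wsmb" ] && [ "$wsmb" -gt 0 ]; then
    echo "$wsmb" > "/sys/block/${disk_name}/queue/discard_max_bytes"
    nvram set "ssd_trim_${disk_id}_discard_max_bytes=${wsmb}"
    nvram commit
fi

# On subsequent runs, the saved value is reapplied before running fstrim
saved="$(nvram get "ssd_trim_${disk_id}_discard_max_bytes")"
if [ -n "$saved" ] && [ "$saved" -gt 0 ]; then
    echo "$saved" > "/sys/block/${disk_name}/queue/discard_max_bytes"
fi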
 
My unmap script has already written 33550336 to /sys/block/"${drive}"/queue/discard_max_bytes.

What would I need to do to "reset" this to properly test your new script?

(I'll probably keep using my own scripts; one has a hardcoded vendor ID and the other this "magic number" 33550336, but the unmap script is then only 3 lines and the trim only 1 line, both plus one line for the shebang; see the rough sketch below)

Also, what should I do after testing (I don't know when I can do this; depends on the answer to the question above) to restore the previous state? (You save data in NVRAM that should be deleted?)
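For context, a rough sketch of what such minimal scripts might look like; the device path and the 33550336 value are taken from this thread, the mountpoint is a placeholder, the hardcoded vendor ID check is omitted, and the real scripts may differ:
Code:
#!/bin/sh
# unmap script (sketch): enable unmap and cap discard_max_bytes
echo unmap > /sys/block/sda/device/scsi_disk/0:0:0:0/provisioning_mode
echo 33550336 > /sys/block/sda/queue/discard_max_bytes

#!/bin/sh
# trim script (sketch): trim the mounted filesystem
fstrim -v /tmp/mnt/YOUR_SSD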
 
What would I need to do to "reset" this to properly test your new script?
Run this command, based on your previous output in this thread:
Code:
echo 4294966784 > /sys/block/$YOUR_DRIVE/queue/discard_max_bytes

Also, what should I do after testing (I don't know when I can do this; depends on the answer to the question above) to restore the previous state? (You save data in NVRAM that should be deleted?)
The script will print the nvram key in the logs. For example:
Code:
ssd_trim: Skipping drive=/dev/sdb (disabled via nvram: ssd_trim_1058_25a3_3256474458584150_discard_max_bytes=0) for label=sdb1 mount=/tmp/mnt/sdb1 part=/dev/sdb1 fs=ext4 vendor_id=1058 vendor_name=Western Digital

So you can run the following commands to remove it:
Code:
nvram unset ssd_trim_<disk-id>_discard_max_bytes
nvram commit
And of course, you will need to rerun your line that sets the desired discard_max_bytes.

That's it.
 
The latest version is giving me these errors:

Code:
/jffs/scripts/ssd# sh ssd_trim.sh
: not found: line 2:
: not found: line 58:
: not found: line 63:
ssd_trim.sh: set: line 67: illegal option -o pipefail
: not found: line 68:

Do I need to do something other than just copying the file?
 
Code:
➜ echo 4294966784 > /sys/block/sda/queue/discard_max_bytes
echo: write error: invalid argument
Maybe your script hasn't applied unmap yet?
Code:
echo unmap > /sys/block/sda/device/scsi_disk/0:0:0:0/provisioning_mode
If it still fails after this command, then ideally you can temporarily comment out the existing hooks in pre-mount/services-start for your script, reboot the router, and have a fresh run of ssd_trim.sh. I know that rebooting is disruptive, so that's only if you're fine with that.

The latest version is giving me these errors:
It looks like you uploaded broken code (possibly with some special characters). Did you use scp for that? You can run the following command from the repo folder to upload only the ssd-related scripts:
Code:
scp -O -r jffs/scripts/ssd/* router:/jffs/scripts/ssd/
Of course, you need to replace router with your actual router alias from ~/.ssh/config.
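For example, a minimal entry could look like this (host alias, address, and user are placeholders):
Code:
# ~/.ssh/config
Host router
    HostName 192.168.50.1
    User admin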
 
After checking directly on the router and copying the files again, everything is working fine now.
 
Maybe your script hasn't applied unmap yet?
Code:
echo unmap > /sys/block/sda/device/scsi_disk/0:0:0:0/provisioning_mode
If it still fails after this command, then ideally you can temporarily comment out the existing hooks in pre-mount/services-start for your script, reboot the router, and have a fresh run of ssd_trim.sh. I know that rebooting is disruptive, so that's only if you're fine with that.
I might try that at a later time. Will report back if/when I do.
 
I've spent a few days lurking, poking around, and pondering adding what I want/need from here to my network/router, and if it's as effective as the Entware stuff, I believe I'll take a stab at getting it running. Since I upgraded my service & router, I miss SLAAC from when I had a native v6 connection, as well as the firewall and blocking from the amtm addons. But first, more time with the README from the GitHub…
 
Dear friend, thanks for your work.

Within the wireguard port forwarding, what's the WG_CLIENT_IP='10.0.0.2' variable?
I think I'm failing to understand.

Also, after running it, does it append itself to firewall-start and similar scripts? Or do I have to manually add it to those scripts?

Thank you.
 
Within the wireguard port forwarding, what's the WG_CLIENT_IP='10.0.0.2' variable?
It is the internal VPN IP address assigned to your router as a WireGuard client. New Merlin versions show a status field on the running WG client page in the format “Connected (Local: x.x.x.x - Public: y.y.y.y)”, and the x.x.x.x value corresponds to your WG_CLIENT_IP.

Currently, this script is designed for self-hosted setups - when you deploy your server on a VPS / cloud instance, create an internal network such as 10.0.0.0/24, and assign internal IPs from that range to your clients. You can then use this script to forward services from your LAN or router into that network so they are accessible to other VPN clients outside your home network, even if your ISP does not provide you with a public IP. For example, if your router has WG_CLIENT_IP='10.0.0.2', and you forward its Web UI on port 8080 and your Home Assistant on port 8088, they will be reachable within the VPN network at 10.0.0.2:8080 and 10.0.0.2:8088, respectively.

A known current limitation is that if you use a commercial VPN provider such as AirVPN that gives you a forwarded port and assigns you a random public IP with that port, you will not be able to forward it to a device on your network using this script. This is because it expects forwarded services to be accessed inside the VPN network using a private WG_CLIENT_IP, not a public address that the provider internally forwards to your client. I'm planning to address this limitation in a future version of the script - a universal VPN port forwarder that supports multiple VPN interfaces simultaneously, including both WireGuard and OpenVPN.

Also, after running it, does it append itself to firewall-start and similar scripts?
No, the script does not automatically append itself to firewall-start or similar scripts. You will need to add it manually to nat-start, and also upload wgc_route.sh and add it to both wgclient-start and wgclient-stop. All related scripts and hooks are listed in the README section of the WG port forwarder. At the moment, there is no automated installer, so some manual setup is required. However, everything is documented, and I plan to add an installer once the core functionality is complete.
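As a rough illustration, the wiring could look something like the lines below; the port forwarder's filename here is a placeholder, and the README remains the authoritative reference for the exact invocation:
Code:
# /jffs/scripts/nat-start  (placeholder filename for the WG port forwarder)
/jffs/scripts/wg_port_forwarding.sh

# /jffs/scripts/wgclient-start and /jffs/scripts/wgclient-stop
# call wgc_route.sh here as described in the README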
 
Well, right now I'm manually adding:
Code:
iptables -t nat -A "my_chain" -i "wgc4" -p tcp --dport "11111" -j DNAT --to-destination "10.0.0.10:11111"
where 10.0.0.10 is my client's LAN IP (not the WireGuard local IP), and I think it is working correctly.

Why do we need the VPN local IP?

Currently using AirVPN.
 
Why do we need the VPN local IP?
To make the VPN client IP and forwarded ports accessible from the home network as well, the script creates rules like these:
Code:
-A PREROUTING -d 10.0.0.2/32 -j WGC1_VSERVER
-A WGC1_VSERVER -i wgc1 -j WGC1_RULES
-A WGC1_VSERVER -i br0 -j WGC1_RULES
-A WGC1_RULES -p tcp -m tcp --dport 8080 -j DNAT --to-destination 192.168.50.1:80
As you can see, it listens on this IP not only on wgc1 but also on br0.

wgc1 applies only to external traffic coming into your router. Without the corresponding br0 rule, 10.0.0.2 would not be reachable from your LAN devices because LAN traffic is bridged and does not pass through wgc1. Using an IP-based approach ensures that WG_CLIENT_IP is always reachable both inside and outside your LAN. This way, you can avoid clunky switching between 10.0.0.2 and 192.168.50.1, for example - you can always use the same IP.

This design also mirrors Asus's internal WAN port forwarding logic, which is also IP-based and uses a dedicated chain:
Code:
-A PREROUTING -d x.x.x.x/32 -j VSERVER
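If you want to inspect these chains on your own router, a standard iptables listing works (chain names as in the examples above):
Code:
iptables -t nat -S PREROUTING | grep VSERVER
iptables -t nat -S WGC1_VSERVER
iptables -t nat -S WGC1_RULES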
 
