Tutorial [Scripts][2025-11-01] IPSet-based Firewall & Tunnel Director, SSD Trimming, IPv6 Enhancements, WG Port Forwarder, and more

I have other values:
Code:
➜ cat /sys/block/sda/queue/discard_max_bytes

4294966784

➜ cat /sys/block/sda/queue/discard_granularity

4096
 
File system is ext4:
/dev/sda1 ext4 57431276 2203612 52280544 4% /tmp/mnt/SSD

Code:
cat /sys/block/sda/queue/discard_max_bytes
0
cat /sys/block/sda/queue/discard_granularity
0
The zero values in the queue mean that UNMAP was not properly applied to the device, even though the unmap script set the mode. I suspect either the enclosure or the SSD doesn't support it. I'll probably have the script check these values and report an explicit error for such cases.
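For anyone curious, such a check could look something like this (a sketch only, not the script's current code; the SYS_BASE override exists purely to make the function easy to test):

```shell
# Sketch of the proposed check: after setting the provisioning mode,
# verify the kernel reports non-zero discard parameters for the device.
check_discard() {
    dev="$1"
    base="${SYS_BASE:-/sys}"    # overridable only for testing
    max=$(cat "$base/block/$dev/queue/discard_max_bytes" 2>/dev/null || echo 0)
    gran=$(cat "$base/block/$dev/queue/discard_granularity" 2>/dev/null || echo 0)
    if [ "${max:-0}" -eq 0 ] || [ "${gran:-0}" -eq 0 ]; then
        echo "ssd_unmap: ERROR: $dev reports no discard support (enclosure or SSD likely lacks UNMAP)" >&2
        return 1
    fi
    echo "ssd_unmap: $dev: discard_max_bytes=$max discard_granularity=$gran"
}
```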

What's odd is that it shows up as "VLI Manufacturer String" - it's just a generic name from the controller. The vendor should have given a specific name to the device to make it recognizable. What brand is your enclosure/SSD?

I have other values:
Code:
➜ cat /sys/block/sda/queue/discard_max_bytes

4294966784

➜ cat /sys/block/sda/queue/discard_granularity

4096
Your drive reports proper values. What file system are you using? ext4?

I tested TRIM on my second drive formatted as NTFS, and it didn't work (which was expected), but the error message was different:
Code:
ssd_trim: ERROR: fstrim: FITRIM: Inappropriate ioctl for device
 
According to the link below, a value greater than zero for DISC-MAX would mean that TRIM might be enabled?
This is the same as reading discard_max_bytes directly from /sys/block, which we have already done.

Your case is quite interesting. The provisioning mode is successfully set to unmap, and you have discard_max_bytes > 0, but fstrim fails with FITRIM: Remote I/O error, which is different from FITRIM: Operation not supported. Basically, that means the discard command reaches the device, but the device or bridge fails while executing it. This looks like a buggy USB bridge (or firmware) that claims unmap but doesn't handle it correctly.

@hervon's case is different - the error message and discard_max_bytes = 0 simply indicate that TRIM is not supported.

@XIII I have one more idea for you to check - try trimming just 1 MB:
Code:
fstrim -v -o 4096 -l $((1024*1024)) /tmp/mnt/ssd
 
TRIM initially did not work for the same HW combo on my Raspberry Pi 4 B either, but using these instructions I got it to work on that platform:


Do we have similar tools for our ASUS routers?
What did you change to make it start working? Did you modify discard_max_bytes to a custom value? Based on the output of the fstrim command limited to 1 MB, TRIM reaches your disk, but the default byte count is probably wrong, so it fails.

Otherwise, the logic of my script is basically the same. It changes the provisioning mode and runs fstrim, similar to the "Enabling TRIM" section in the tutorial.
 
How do I set up SSD trim for multiple drives?
And once all the scripts are uploaded to the router, will they all be enabled on the next reboot?
 
What did you change to make it start working? Did you modify discard_max_bytes to a custom value?
I first had to change the provisioning mode from full to unmap, but I indeed also had to modify discard_max_bytes to a non-default value. I had to set discard_max_bytes to a multiple of discard_granularity, as Tom suggested in the comments. Using a corrected version of his script, the value for my HW combo apparently is 33550336, but I don't know how to try/set it on the router as well.
 
How do I set up SSD trim for multiple drives?
Duplicate this pre-mount block per device for unmap, specifying a different per-drive SSD_VOLUME_LABEL each time:


For example:
Bash:
SSD_VOLUME_LABEL_1='ssd1'
SSD_VOLUME_LABEL_2='ssd2'

if tune2fs -l "$1" 2>/dev/null |
    grep -q "Filesystem volume name: *$SSD_VOLUME_LABEL_1$";
then
    /jffs/scripts/ssd/ssd_unmap.sh # specify your Vendor ID here
fi

if tune2fs -l "$1" 2>/dev/null |
    grep -q "Filesystem volume name: *$SSD_VOLUME_LABEL_2$";
then
    /jffs/scripts/ssd/ssd_unmap.sh # specify your Vendor ID here
fi

Then add new lines to services-start per every device:


Bash:
cru a trim_ssd_1           "0 4 * * 0 /jffs/scripts/ssd/ssd_trim.sh ssd1"
cru a trim_ssd_2           "0 4 * * 0 /jffs/scripts/ssd/ssd_trim.sh ssd2"

These changes will be enabled on the next reboot, yes.

I will add easier automated handling for multiple devices in the next version of the script, but for now you can make these changes. It will work fine.
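If you prefer, the duplicated pre-mount blocks can also be written as a loop over the labels (same logic, just more compact; the labels here are examples):

```shell
# Example: handle several labeled SSDs in one pre-mount loop.
# "$1" is the partition device passed to the pre-mount script.
for label in ssd1 ssd2; do
    if tune2fs -l "$1" 2>/dev/null |
        grep -q "Filesystem volume name: *$label$"
    then
        /jffs/scripts/ssd/ssd_unmap.sh # specify your Vendor ID here
    fi
done
```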
 
I first had to change the provisioning mode from full to unmap, but I indeed also had to modify discard_max_bytes to a non-default value. I had to set discard_max_bytes to a multiple of discard_granularity, as Tom suggested in the comments. Using a corrected version of his script, the value for my HW combo apparently is 33550336, but I don't know how to try/set it on the router as well.
Simply add this line to your ssd_unmap.sh:
Bash:
echo <YOUR_VALUE> > /sys/block/sda/queue/discard_max_bytes
Keep in mind that sda is not guaranteed if you have multiple USB devices attached; in that case, you need to determine it dynamically.
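For illustration, one way to derive the device dynamically from the mount point (a sketch; it assumes sdXN-style partition names, which is what the router uses for USB disks):

```shell
# Sketch: resolve the base block device (e.g. "sda") behind a mount point,
# so the sysfs path doesn't hardcode sda. Assumes sdXN-style names.
disk_for_mount() {
    part=$(df -P "$1" | awk 'NR==2 {print $1}')   # e.g. /dev/sda1
    basename "$part" | sed 's/[0-9]*$//'          # sda1 -> sda
}
# Usage:
#   echo 33550336 > "/sys/block/$(disk_for_mount /tmp/mnt/ssd)/queue/discard_max_bytes"
```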

Try running these commands manually first and then running the trim script. It should work fine if these values are identical to the ones on your Raspberry.

If it works, can you please compare the custom values for both parameters that you set on your Raspberry Pi and the default values detected on Merlin, and post them here? If there's a clear algorithm for how to update them, I can implement the automatic correction in the next version of the script, along with easier handling of multiple disks.
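For what it's worth, one candidate algorithm is simply rounding the default down to a multiple of the granularity. This is speculative until the values are compared, but the numbers in this thread happen to fit: 4294966784 rounded down to a multiple of 4096 is exactly 4294963200.

```shell
# Speculative sketch: align discard_max_bytes down to the nearest multiple
# of discard_granularity. Not confirmed as the right fix for every bridge.
align_discard_max() {
    max="$1"; gran="$2"
    echo $(( (max / gran) * gran ))
}
align_discard_max 4294966784 4096   # prints 4294963200
```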
 
Only needed to change /sys/block/sda/queue/discard_max_bytes and now it seems to work?! 🎉
Code:
➜ echo 33550336 > /sys/block/sda/queue/discard_max_bytes

➜ ./ssd_trim.sh ssd
ssd_trim: Running fstrim on /tmp/mnt/ssd...
ssd_trim: /opt/sbin/fstrim succeeded: /tmp/mnt/ssd: 243424014336 bytes trimmed

➜ ./ssd_trim.sh ssd
ssd_trim: Running fstrim on /tmp/mnt/ssd...
ssd_trim: /opt/sbin/fstrim succeeded: /tmp/mnt/ssd: 0 bytes trimmed
 
Only needed to change /sys/block/sda/queue/discard_max_bytes and now it seems to work?! 🎉
Yes, that's correct.

Can you please try trimming with this value, assuming your discard_granularity is 4096? I'm curious if it would work.

Bash:
echo 4294963200 > /sys/block/sda/queue/discard_max_bytes

Also, can you send me the following values from your disk?
Bash:
cat /sys/block/sda/queue/discard_max_hw_bytes
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/write_same_max_bytes
 
Code:
➜ cat /sys/block/sda/queue/discard_max_hw_bytes
4294966784

➜ cat /sys/block/sda/queue/logical_block_size
512

➜ cat /sys/block/sda/queue/write_same_max_bytes
33550336

➜ echo 4294963200 > /sys/block/sda/queue/discard_max_bytes
➜ /jffs/scripts/ssd/ssd_trim.sh ssd
ssd_trim: Running fstrim on /tmp/mnt/ssd...
ssd_trim: ERROR: fstrim: FITRIM: Remote I/O error
 

2025-11-01 Update


This release brings full IPv6 support to the IPSet-based WAN Firewall and extends IPv6 functionality in the Asus firmware in several areas.

1. WAN Firewall now supports IPv6 sets and includes two IPv6 blocklists enabled by default:
  • Team Cymru fullbogons - blocks unallocated and reserved IPv6 ranges (bogons).
  • Spamhaus DROPv6 - blocks confirmed malicious and abusive IPv6 networks.
You can add any custom IPv6 blocklists and define country-based blocking rules using GeoLite2 or IPdeny lists.

Example configuration:
Code:
CUSTOM_V6_IPSETS='
blk6:
  https://www.team-cymru.org/Services/Bogons/fullbogons-ipv6.txt
  https://www.spamhaus.org/drop/dropv6.txt

  # Uncomment the URL below for enhanced protection;
  # replace YOUR_API_KEY with your AbuseIPDB API key
  # https://api.abuseipdb.com/api/v2/blacklist -d plaintext -d ipVersion=6 -H "Key: YOUR_API_KEY"
'

WAN_FW_V6_RULES='
# General IPv6 inbound blocking
block:any:any:blk6

# Block all incoming IPv6 web traffic from China (HTTP 2/3)
block:443:udp:cn6
block:80,443:tcp:cn6
'
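To make the rule syntax concrete: each `block:<ports>:<proto>:<set>` line corresponds, conceptually, to an ip6tables rule matching the named IPv6 set. A rough illustration that only prints the expansion (not the script's actual code; the chain and flags are assumptions, and the `any:any` form would drop the port/protocol matches):

```shell
# Illustration only - the script's real rules may differ in chain and options.
rule_to_cmd() {
    ports=$(echo "$1" | cut -d: -f2)
    proto=$(echo "$1" | cut -d: -f3)
    setname=$(echo "$1" | cut -d: -f4)
    echo "ip6tables -I FORWARD -p $proto -m multiport --dports $ports" \
         "-m set --match-set $setname src -j DROP"
}
rule_to_cmd 'block:80,443:tcp:cn6'
```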

2. IPv6 SLAAC and RA support for SDN networks - Asus firmware currently enables only stateful DHCPv6 for SDNs, not stateless autoconfiguration (the issue is discussed in detail here). When IPv6 is enabled for SDN interfaces in the GUI, it reconfigures all internal subnets from /64 to /72, which completely breaks SLAAC, since most client devices require a /64 prefix to self-generate global IPv6 addresses. These scripts automatically allocate proper /64 subnets from the delegated prefix and enable Router Advertisements (RA) for SLAAC.
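To sketch the allocation idea (not the script's code; it assumes the delegated prefix boundary falls on the fourth hextet, as with a /56):

```shell
# Sketch: compute the Nth /64 from a /56 delegated prefix by incrementing
# the fourth hextet. Prefix format and boundary are simplifying assumptions.
nth_subnet() {
    prefix="$1"          # first four hextets, e.g. 2a10:abcd:1234:aa00
    n="$2"               # subnet index within the /56 (0-255)
    head=${prefix%:*}    # 2a10:abcd:1234
    last=${prefix##*:}   # aa00
    printf '%s:%04x::/64\n' "$head" $(( 0x$last + n ))
}
nth_subnet 2a10:abcd:1234:aa00 1   # prints 2a10:abcd:1234:aa01::/64
```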

IPv6 subnets are enabled by default for all SDNs to keep setup simple, but you can easily exclude specific ones if needed:
Code:
###################################################################################################
# 1. Excluded interfaces for SDN IPv6 auto-assignment
# -------------------------------------------------------------------------------------------------
# * Lists bridge interfaces that should be skipped when assigning IPv6 /64 subnets.
# * Useful if you want to keep certain SDNs IPv4-only, or manage their IPv6 configuration manually.
# * The main LAN (br0) is always excluded automatically, so it should not be listed here.
#
# Example:
#   EXCLUDED_IFACES='br54 br56'
###################################################################################################
EXCLUDED_IFACES=''

3. Static IPv6 routes to LAN hosts - lets you define and maintain static IPv6 routes by delegating an entire prefix (e.g., /64) to a single device. Handy for setups where a device like a Docker host or secondary router manages its own downstream subnet.

Config:
Code:
# -------------------------------------------------------------------------------------------------
# STATIC_ROUTE_RULES - one rule per line: iface|link_local|static_route
# -------------------------------------------------------------------------------------------------
# Syntax (no spaces; comments allowed with "#", blank lines ignored):
#   <iface>|<link_local>|<static_route>
#
# Fields:
#   iface        - router interface that reaches the target host (e.g., br0, br52).
#   link_local   - link-local address of the target host on that interface (must be fe80::/10).
#                  Do NOT append "/64" here - it's a single address, not a prefix.
#   static_route - IPv6 subnet to be routed to that host (typically a /64 carved from your /56).
#
# Tips:
#   * Find the host's link-local address on the host itself:
#         ip -6 addr show dev <iface>
#   * Multiple rules are supported; each will be validated and applied independently.
#   * Example carving /64 subnets from 2a10:abcd:1234:aa00::/56:
#         br0|fe80::1234:56ff:fe78:9abc|2a10:abcd:1234:aa10::/64
#         br0|fe80::1234:56ff:fe78:9abc|2a10:abcd:1234:aa20::/64
#
# Note:
#   Avoid using the first few subnets from your delegated prefix if you also use the
#   sdn_v6_br.sh script for SDN IPv6 configuration. That script automatically assigns
#   the earliest /64s (starting from index 0) to SDN bridges, so using the same ranges
#   for static routes may cause prefix conflicts.
# -------------------------------------------------------------------------------------------------
STATIC_ROUTE_RULES='
'
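For context, each rule conceptually reduces to a single `ip -6 route` command. A sketch that just prints the expansion (illustrative only; the script's real handling also validates each field before applying anything):

```shell
# Illustration: expand "iface|link_local|static_route" into the route command
# it conceptually maps to. Prints only; does not touch the routing table.
rule_to_route() {
    iface=${1%%|*}
    rest=${1#*|}
    ll=${rest%%|*}
    route=${rest#*|}
    echo "ip -6 route replace $route via $ll dev $iface"
}
rule_to_route 'br0|fe80::1234:56ff:fe78:9abc|2a10:abcd:1234:aa10::/64'
```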
 
