I am currently running 378.50 on my RT-N16. I installed Entware so that I could use strongSwan (v5.2.1) to establish an IPsec VPN tunnel between my home and office. For the most part this is working well, but I recently noticed that memory is consumed in proportion to the amount of tunnel traffic and never seems to be released. My initial searching turned up the following thread, which appears to describe the same issue (that user was running Tomato, not Asuswrt-Merlin):
http://www.linksysinfo.org/index.php?threads/memory-leak-ipsec-strongswan.70929/
Based on that, I installed slabtop and confirmed that my case also involves secpath_cache consuming memory that never seems to be released (I've seen it stay occupied for several days). After a reboot, secpath_cache shows a CACHE SIZE of 0K. I'm not sure if it helps, but here is an example of the slabtop output after approximately 100 MB of data had been transferred over the tunnel:
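For anyone who wants to check this without installing slabtop, the same counters are exposed in /proc/slabinfo (its first columns are name, active_objs, num_objs, objsize, objperslab, pagesperslab). On the router you would run `grep '^secpath_cache' /proc/slabinfo`; below, a captured line stands in for the live file, and the 36-byte object size is an assumption on my part (slabtop rounds it to 0.03K):

```shell
# Parse one /proc/slabinfo line and report the cache's rough footprint.
# The sample line mirrors the secpath_cache figures from my slabtop dump;
# replace the echo with: grep '^secpath_cache' /proc/slabinfo
echo "secpath_cache 40514 40567 36 113 1" |
  awk '{ printf "%s: %d objs x %d B = %d KB\n", $1, $3, $4, $3 * $4 / 1024 }'
```

Logging that line periodically (e.g. from cron) would make it easy to correlate the growth with tunnel traffic.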
Code:
Active / Total Objects (% used) : 70635 / 75743 (93.3%)
Active / Total Slabs (% used) : 2364 / 2373 (99.6%)
Active / Total Caches (% used) : 86 / 128 (67.2%)
Active / Total Size (% used) : 9586.77K / 10112.02K (94.8%)
Minimum / Average / Maximum Object : 0.01K / 0.13K / 4096.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
40567 40514 99% 0.03K 359 113 1436K secpath_cache
8496 8455 99% 0.06K 144 59 576K buffer_head
4290 4261 99% 0.13K 143 30 572K size-32
2808 2766 98% 0.05K 36 78 144K sysfs_dir_cache
2418 2418 100% 0.12K 78 31 312K dentry
4120 2156 52% 0.09K 103 40 412K flow_cache
1380 1269 91% 0.08K 30 46 120K vm_area_struct
840 840 100% 0.13K 28 30 112K size-64
793 788 99% 0.29K 61 13 244K inode_cache
765 765 100% 0.22K 45 17 180K skbuff_head_cache
737 728 98% 4.00K 737 1 2948K size-4096
840 660 78% 0.16K 35 24 140K filp
507 501 98% 0.29K 39 13 156K radix_tree_node
678 470 69% 0.01K 2 339 8K anon_vma
480 393 81% 0.13K 16 30 64K size-128
374 369 98% 0.34K 34 11 136K squashfs_inode_cache
330 315 95% 0.25K 22 15 88K ip_dst_cache
312 312 100% 0.30K 26 12 104K proc_inode_cache
304 304 100% 0.50K 38 8 152K size-512
270 263 97% 0.39K 27 10 108K shmem_inode_cache
180 177 98% 0.42K 20 9 80K ext3_inode_cache
150 147 98% 0.13K 5 30 20K size-96
150 143 95% 0.25K 10 15 40K size-192
128 128 100% 1.00K 32 4 128K size-1024
160 127 79% 0.09K 4 40 16K kmem_cache
120 113 94% 0.25K 8 15 32K jffs2_refblock
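For what it's worth, slabtop's CACHE SIZE column appears to be the slab count times the page size; a quick sanity check on the secpath_cache row above (assuming 4 KB pages on this platform):

```shell
# 359 slabs x 4 KB/page for secpath_cache, matching the 1436K column above
echo "$((359 * 4))K"
```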
Anyway, I apologize if this isn't the ideal place to post about this issue; if that's the case, I hope someone can point me in the right direction.
Thanks for any help.