Out of Memory Errors 388.1 / No space left on device [SOLUTIONS]


I am having a similar issue, but I'm not sure if it's related to RAM. The web GUI is just really slow when I log in, especially on the AiMesh page and the network map page. I have tested official firmware 388.1 and 388.2, Merlin 388.1, Merlin 388.1 ROG, and the Merlin 388.1 alpha.

Setup 1: 1x AX11000 with fewer than 20 clients, up for over a month on the 388.1 ROG Merlin firmware with VPN Director on. No problem at all; everything is smooth.

Setup 2: 1x AX11000 + 2x XT12, about 50 clients. WiFi and everything else work fine, with no slowdowns and no disconnects. It's just that logging in to the web GUI to change any settings is very slow, especially on the AiMesh page and network map page, which can't display clients most of the time. The clients are in fact connected and working fine.

This is a clean setup; even on the official 388.2 firmware the GUI still feels slow sometimes.
Setup 3: 1x AXE16000 + 2x XT12, about 10 clients or fewer, mostly LAN. Same issue: it was working fine for a few minutes, then the AiMesh page started to lag and stopped displaying clients.

I think I will try the older Merlin 386 firmware.

AXE16000 + 2x XT12 nodes. The AXE16000 has 2 GB of DDR4, so I'm not sure why it still lags; it must be a bug.

Total: 2001.40 MB
Free: 776.75 MB
Buffers: 14.60 MB
Cache: 50.15 MB
Swap: No swap configured

Internal Storage
NVRAM usage: 146431 / 196608 bytes
JFFS: 2.06 / 44.46 MB
 
If there is a memory leak issue, it will most likely be felt quicker on devices with less RAM. So on devices like the AXE16000 it should take longer to manifest, if present.
Nevertheless, @rtm, not knowing how your AXE16000 is configured and what type of services are being run, it seems to me that having only 776 MB free out of 2 GB of RAM is a bit weird... On my TUF-AX5400 (512 MB RAM), stock with AiProtection on, no USB plugged in, and an AX55 node as mesh, I have about 100 MB free with 386. I would assume that in your case, give or take a few changes for the different platform and mesh nodes, no more than 500 MB of used RAM would be expected. Your AXE is using about 1.2 GB... But I speculate.
In my case, problems only started to be apparent with 388 when available free RAM started approaching 40 MB (I think at one point I saw it hit about 30 MB).
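
For reference, the same counters can also be read from a BusyBox shell on the router; a minimal sketch (an aside of mine, not from the posts above; the /proc/meminfo field names are standard Linux):

Code:
# Overall memory picture; BusyBox free prints values in kB
free
# The raw counters behind the Total/Free/Buffers/Cache figures
grep -E 'MemTotal|MemFree|Buffers|^Cached' /proc/meminfo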
 
I am currently running Merlin 386.8. After 3 hours and 30 minutes, the AiMesh and network map pages are all loading fast; the GUI feels a lot faster than Merlin 388.1 or official 388.1/388.2.

1x AXE16000 + 2x XT12 nodes; the two XT12 nodes are on the latest official 388.2 firmware. There is a notice that a new firmware version is now available, plus this yellow warning: "Your router is running low on free NVRAM, which might affect its stability. Review long parameter lists (like DHCP reservations), or consider doing a factory default reset and reconfiguring."

I've only got about 8 manually assigned IPs in the DHCP list, so I'm not sure why it's saying NVRAM is low.
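
As an aside, the NVRAM usage behind that warning can also be checked over SSH; a minimal sketch (it assumes BusyBox awk/sort/head/cut, and that ASUSWRT's nvram utility prints its size summary to stderr, which is the behaviour I've seen on recent builds):

Code:
# Print just the usage summary line (nvram writes it to stderr)
nvram show >/dev/null
# List the ten largest NVRAM entries, truncated to 80 chars, to spot oversized parameter lists
nvram show 2>/dev/null | awk '{print length($0), $0}' | sort -rn | head -10 | cut -c 1-80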

On Merlin 388.1 the AiMesh was stable and speeds were fast. Now, somehow, my AiMesh is not stable and I can't seem to get the guest network working on the nodes. I might have to change back to the 388.1 firmware and just put up with the slow web GUI.


Total: 2001.46 MB
Free: 971.83 MB
Buffers: 14.41 MB
Cache: 54.80 MB
Swap: No swap configured

Internal Storage
 
@jtp10181 - That is a pain if you are getting device disconnections. I think it might be a different issue than the memory-related one - potentially the WiFi drivers used in 388.1. Why not go back to 386.5_2? It is a very stable firmware for the AX86U.

I was working on my NAS and AdGuard servers yesterday and accidentally unplugged my router, so the memory/reboot clock has been reset and I have no new information to report.
I also have 386.5_2 on my RT-AX86S and it's been rock solid now for more than a week!
 
Well, this was working well, but the internet was a little flaky this AM and I found the web UI totally locked up; I could log in over SSH but not run anything. After a few logins I got this:
[screenshot attached]


WiFi is still working, but I can tell it's not 100% well.
Going to pull power and reboot, see if anything is in the logs.
 
That does not look great. Hope you find something helpful in the log.

My AX86S is running well on 388.1 at the moment. I have not made any changes apart from a node firmware upgrade. After 9 days, no random reboots, and memory use is at around 85%.

I upgraded my 2 AX58U nodes to stock 388.

will keep monitoring and post any new info!
 
I'm having similar issues with my RT-AX68U, on a clean install of 388.1. I will try the cron jobs to kill sysstate and networkmap:

Code:
Jan 18 16:45:31 dnsmasq[22638]: failed to allocate 66573 bytes
Jan 18 16:45:31 dnsmasq[22651]: failed to allocate 66573 bytes
Jan 18 16:45:31 dnsmasq[22655]: failed to allocate 66573 bytes
Jan 18 16:45:31 dnsmasq[22653]: failed to allocate 66573 bytes
Jan 18 16:45:31 dnsmasq[22654]: failed to allocate 66573 bytes
Jan 18 16:52:33 dnsmasq[23835]: failed to allocate 66573 bytes
Jan 18 16:52:33 dnsmasq[23836]: failed to allocate 66573 bytes
Jan 18 16:52:33 dnsmasq[23839]: failed to allocate 66573 bytes
Jan 18 17:02:32 dnsmasq[25524]: failed to allocate 66573 bytes
Jan 18 17:02:32 dnsmasq[25526]: failed to allocate 66573 bytes
Jan 18 17:02:32 dnsmasq[25528]: failed to allocate 66573 bytes
 
Had PC issues today; I finally was able to go back and get screenshots of the logs.
EDIT: Just realized connmon was a custom app/script, so I'm just going to disable that for now.

Looks like this watching / BONDING stuff was the start of the nasty stuff:
[log screenshot attached]


Going back a while, it seems to start around here. It was preceded by this connmon warning multiple times, before the watchdog thing kicked in and eventually spiraled out of control.
[log screenshot attached]


Which eventually led to this for about 6 hours straight (all night):
[log screenshot attached]


Then this shortly before I pulled the plug:
[log screenshot attached]
 
This does not look great, and it's above my pay grade to be honest :)

You have something very weird going on. The log entries (all night) appear to be related to having no file space - it could be that a log/error file has grown very large and used up all the free space on the file system partition, or it could be an indication that the file system is corrupt.

If I were you, I would go nuclear: do a full reset, wipe your jffs partition, and manually set up the entire network, including any nodes (if you have them). But make as few changes as possible from the default config. That is what I have done and how I am running now. No USB, no swap, no Entware, no scripts...

I think it will be quicker to do this than trying to troubleshoot the issue. I would go as far as re-flashing merlin 388.1 as part of the process so you know you are really starting from scratch.

I have a default setup apart from setting WiFi to fixed channels, disabling explicit beamforming (on both 2.4 and 5 GHz), and enabling Cake QoS.

Give that a try and see how you go over the next couple of weeks.
 
So this should work for anyone: insert this into /jffs/scripts/init-start (then run the commands manually, or reboot, to apply it).
Code:
# Fix for networkmap's missing lock directory, and restart the leaky processes daily
mkdir -p /var/lock/usr/networkmap
cru a kill_sysState "14 2 * * * killall sysstate"      # runs daily at 02:14
cru a kill_NetworkMap "18 2 * * * killall networkmap"  # runs daily at 02:18
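
As a quick sanity check (an aside, not part of the original instructions), `cru l` lists the active cron entries after the script runs, and should include the two jobs in the same form quoted later in this thread:

Code:
cru l
# Expected to include:
# 14 2 * * * killall sysstate #kill_sysState#
# 18 2 * * * killall networkmap #kill_NetworkMap#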

I'm getting these errors when the killall networkmap runs. Anything I should do?

Code:
Jan 19 02:18:00 networkmap: Error unlocking 12: 9 Bad file descriptor
Jan 19 02:18:00 networkmap: Error unlocking 0: 9 Bad file descriptor

This is new as well:
Code:
Jan 19 05:51:09 dnsmasq-script[9037]: json_object_from_file: error opening file /jffs/nmp_vc_json.js: No such file or directory
 
They are all normal messages when you kill networkmap.
 
If I were you, I would go nuclear: do a full reset, wipe your jffs partition, and manually set up the entire network, including any nodes
There are reports of the networkmap memory issues happening even after a full reset. I may try doing this eventually anyway for good measure; I just need to make sure I back up at least the custom device names and the static IP list. I know there are already posts about what to back up.

Based on an earlier tip from Merlin, I'm thinking that /tmp or /var is filling up somehow. I'm going to put this in a script and have it run every hour to log the directory sizes. If one of them is slowly filling up, I should be able to spot it after a day or two.

Code:
df -h | grep -P '/(var|tmp)$' | logger
 
The built-in grep command doesn't support -P, use -E.
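
For reference, the corrected command (the same form used later in the thread) is:

Code:
df -h | grep -E '/(var|tmp)$' | logger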
 
Following some tips above from @jtp10181, @ColinTaylor and @RMerlin, I started looking for clues about this memory leak, which seems to have been occurring on my RT-AX68U since I upgraded to 388.1.

So I have a couple of cron jobs running every day around 2:15 am which kill sysstate and networkmap.

Code:
14 2 * * * killall sysstate #kill_sysState#
18 2 * * * killall networkmap #kill_NetworkMap#
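
As an aside (a sketch of mine, not something posted in the thread): instead of killing networkmap at a fixed time, the restart could be triggered only when its resident memory crosses a limit. The script path /jffs/scripts/memwatch.sh and the 20 MB threshold are assumptions:

Code:
#!/bin/sh
# Restart networkmap only when its RSS exceeds a threshold, rather than nightly
LIMIT_KB=20480
PID="$(pidof networkmap | awk '{print $1}')"   # first PID if there are several
[ -z "$PID" ] && exit 0                        # nothing to do if not running
RSS_KB="$(awk '/^VmRSS:/ {print $2}' /proc/$PID/status)"
if [ "$RSS_KB" -gt "$LIMIT_KB" ]; then
    logger -t memwatch "networkmap VmRSS ${RSS_KB} kB > ${LIMIT_KB} kB, restarting"
    killall networkmap
fi

It could then be scheduled, e.g. every 30 minutes, with: cru a mem_watch "*/30 * * * * /jffs/scripts/memwatch.sh"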

I also have a cron job running every hour at 31 minutes past, logging networkmap's VmRSS as below. You can see the memory usage constantly increasing and finally dropping when networkmap is killed, just before the measurement at 2:31 am. If networkmap were not being killed, the router would run out of memory and start crashing.

Code:
31 * * * * cat /proc/$(pidof networkmap)/status | grep -E -w 'Pid|Name|VmRSS' | logger #Mem#

Code:
Jan 19 16:31:00 VmRSS:     9272 kB
Jan 19 17:31:00 VmRSS:     9520 kB
Jan 19 18:31:00 VmRSS:     9556 kB
Jan 19 19:31:00 VmRSS:     9704 kB
Jan 19 20:31:00 VmRSS:     9844 kB
Jan 19 21:31:00 VmRSS:     9992 kB
Jan 19 22:31:00 VmRSS:    10136 kB
Jan 19 23:31:00 VmRSS:    10280 kB
Jan 20  0:31:00 VmRSS:    10532 kB
Jan 20  1:31:00 VmRSS:    10576 kB
Jan 20  2:31:00 VmRSS:     7564 kB
Jan 20  3:31:00 VmRSS:     7452 kB
Jan 20  4:31:00 VmRSS:     7568 kB
Jan 20  5:31:00 VmRSS:     7712 kB
Jan 20  6:31:00 VmRSS:     7844 kB
Jan 20  7:31:00 VmRSS:     7984 kB
Jan 20  8:31:00 VmRSS:     8128 kB
Jan 20  9:31:00 VmRSS:     8380 kB
Jan 20 10:31:00 VmRSS:     8520 kB
Jan 20 11:31:00 VmRSS:     8556 kB
Jan 20 12:31:00 VmRSS:     8696 kB
Jan 20 13:31:00 VmRSS:     8840 kB
Jan 20 14:31:00 VmRSS:     8972 kB
Jan 20 15:31:00 VmRSS:     9108 kB
Jan 20 16:31:00 VmRSS:     9248 kB
Jan 20 17:31:00 VmRSS:     9492 kB
Jan 20 18:31:00 VmRSS:     9632 kB
Jan 20 19:31:00 VmRSS:     9772 kB
Jan 20 20:31:00 VmRSS:     9912 kB
Jan 20 21:31:00 VmRSS:    10052 kB
Jan 20 22:31:00 VmRSS:    10192 kB
Jan 20 23:31:00 VmRSS:    10228 kB
Jan 21  0:31:00 VmRSS:    10364 kB
Jan 21  1:31:00 VmRSS:    10508 kB
Jan 21  2:31:00 VmRSS:     7616 kB
Jan 21  3:31:00 VmRSS:     7504 kB
Jan 21  4:31:00 VmRSS:     7604 kB
Jan 21  5:31:00 VmRSS:     7732 kB
Jan 21  6:31:00 VmRSS:     7852 kB
Jan 21  7:31:00 VmRSS:     8088 kB
Jan 21  8:31:00 VmRSS:     8220 kB
Jan 21  9:31:00 VmRSS:     8352 kB
Jan 21 10:31:00 VmRSS:     8484 kB
Jan 21 11:31:00 VmRSS:     8508 kB
Jan 21 12:31:00 VmRSS:     8644 kB
Jan 21 13:31:01 VmRSS:     8788 kB
Jan 21 14:31:00 VmRSS:     8916 kB
Jan 21 15:31:00 VmRSS:     9040 kB
Jan 21 16:31:00 VmRSS:     9184 kB
Jan 21 17:31:00 VmRSS:     9312 kB
Jan 21 18:31:00 VmRSS:     9452 kB
Jan 21 19:31:00 VmRSS:     9588 kB
Jan 21 20:31:00 VmRSS:     9736 kB
Jan 21 21:31:00 VmRSS:     9876 kB
Jan 21 22:31:00 VmRSS:    10016 kB
Jan 21 23:31:00 VmRSS:    10156 kB
Jan 22  0:31:00 VmRSS:    10288 kB
Jan 22  1:31:00 VmRSS:    10432 kB
Jan 22  2:31:00 VmRSS:     7556 kB
Jan 22  3:31:00 VmRSS:     7444 kB
Jan 22  4:31:00 VmRSS:     7552 kB
Jan 22  5:31:00 VmRSS:     7680 kB
Jan 22  6:31:00 VmRSS:     7816 kB
Jan 22  7:31:00 VmRSS:     7948 kB
Jan 22  8:31:00 VmRSS:     8088 kB
Jan 22  9:31:00 VmRSS:     8212 kB
Jan 22 10:31:00 VmRSS:     8352 kB
Jan 22 11:31:00 VmRSS:     8480 kB
Jan 22 12:31:00 VmRSS:     8616 kB
Jan 22 13:31:00 VmRSS:     8748 kB
Jan 22 14:31:00 VmRSS:     8868 kB
Jan 22 15:31:00 VmRSS:     9020 kB
Jan 22 16:31:00 VmRSS:     9160 kB
Jan 22 17:31:00 VmRSS:     9404 kB
Jan 22 18:31:00 VmRSS:     9440 kB
Jan 22 19:31:00 VmRSS:     9684 kB
Jan 22 20:31:00 VmRSS:     9824 kB
Jan 22 21:31:00 VmRSS:     9860 kB
Jan 22 22:31:00 VmRSS:     9996 kB
Jan 22 23:31:00 VmRSS:    10136 kB
Jan 23  0:31:00 VmRSS:    10276 kB
Jan 23  1:31:00 VmRSS:    10416 kB
Jan 23  2:31:00 VmRSS:     7628 kB
Jan 23  3:31:00 VmRSS:     7520 kB
Jan 23  4:31:00 VmRSS:     7612 kB
Jan 23  5:31:00 VmRSS:     7848 kB
Jan 23  6:31:00 VmRSS:     7976 kB
Jan 23  7:31:00 VmRSS:     8000 kB
Jan 23  8:31:00 VmRSS:     8140 kB
Jan 23  9:31:00 VmRSS:     8380 kB
Jan 23 10:31:00 VmRSS:     8412 kB
Jan 23 11:31:00 VmRSS:     8652 kB
Jan 23 12:31:00 VmRSS:     8788 kB
Jan 23 13:31:00 VmRSS:     8812 kB

continues...
 
... continued

I am also logging the usage of /tmp, which is likewise constantly increasing; this one I don't know how to reset.

Code:
30 * * * * df -h | grep -E '/(var|tmp)$' | logger #tmpfs#

Code:
Jan 19 17:30:00 tmpfs                   206.0M     18.6M    187.4M   9% /tmp
Jan 19 18:30:00 tmpfs                   206.0M     18.7M    187.3M   9% /tmp
Jan 19 19:30:00 tmpfs                   206.0M     18.8M    187.2M   9% /tmp
Jan 19 20:30:00 tmpfs                   206.0M     18.9M    187.1M   9% /tmp
Jan 19 21:30:00 tmpfs                   206.0M     18.9M    187.0M   9% /tmp
Jan 19 22:30:00 tmpfs                   206.0M     19.0M    186.9M   9% /tmp
Jan 19 23:30:00 tmpfs                   206.0M     19.1M    186.9M   9% /tmp
Jan 20  0:30:00 tmpfs                   206.0M     19.2M    186.8M   9% /tmp
Jan 20  1:30:00 tmpfs                   206.0M     19.3M    186.7M   9% /tmp
Jan 20  2:30:00 tmpfs                   206.0M     19.4M    186.6M   9% /tmp
Jan 20  3:30:00 tmpfs                   206.0M     19.5M    186.5M   9% /tmp
Jan 20  4:30:00 tmpfs                   206.0M     19.5M    186.4M   9% /tmp
Jan 20  5:30:00 tmpfs                   206.0M     19.6M    186.4M  10% /tmp
Jan 20  6:30:00 tmpfs                   206.0M     19.7M    186.3M  10% /tmp
Jan 20  7:30:00 tmpfs                   206.0M     19.7M    186.3M  10% /tmp
Jan 20  8:30:00 tmpfs                   206.0M     19.8M    186.2M  10% /tmp
Jan 20  9:30:00 tmpfs                   206.0M     19.9M    186.1M  10% /tmp
Jan 20 10:30:00 tmpfs                   206.0M     20.0M    186.0M  10% /tmp
Jan 20 11:30:00 tmpfs                   206.0M     20.1M    185.9M  10% /tmp
Jan 20 12:30:00 tmpfs                   206.0M     20.2M    185.8M  10% /tmp
Jan 20 13:30:00 tmpfs                   206.0M     20.2M    185.7M  10% /tmp
Jan 20 14:30:00 tmpfs                   206.0M     20.3M    185.7M  10% /tmp
Jan 20 15:30:00 tmpfs                   206.0M     20.4M    185.6M  10% /tmp
Jan 20 16:30:00 tmpfs                   206.0M     20.5M    185.5M  10% /tmp
Jan 20 17:30:00 tmpfs                   206.0M     20.6M    185.4M  10% /tmp
Jan 20 18:30:00 tmpfs                   206.0M     20.7M    185.3M  10% /tmp
Jan 20 19:30:00 tmpfs                   206.0M     20.8M    185.2M  10% /tmp
Jan 20 20:30:00 tmpfs                   206.0M     20.8M    185.1M  10% /tmp
Jan 20 21:30:00 tmpfs                   206.0M     21.0M    185.0M  10% /tmp
Jan 20 22:30:00 tmpfs                   206.0M     21.0M    184.9M  10% /tmp
Jan 20 23:30:00 tmpfs                   206.0M     21.1M    184.9M  10% /tmp
Jan 21  0:30:00 tmpfs                   206.0M     21.2M    184.8M  10% /tmp
Jan 21  1:30:00 tmpfs                   206.0M     21.3M    184.7M  10% /tmp
Jan 21  2:30:00 tmpfs                   206.0M     21.4M    184.6M  10% /tmp
Jan 21  3:30:00 tmpfs                   206.0M     21.5M    184.5M  10% /tmp
Jan 21  4:30:00 tmpfs                   206.0M     21.5M    184.4M  10% /tmp
Jan 21  5:30:00 tmpfs                   206.0M     21.6M    184.4M  10% /tmp
Jan 21  6:30:00 tmpfs                   206.0M     21.7M    184.3M  11% /tmp
Jan 21  7:30:00 tmpfs                   206.0M     21.8M    184.2M  11% /tmp
Jan 21  8:30:00 tmpfs                   206.0M     21.9M    184.1M  11% /tmp
Jan 21  9:30:00 tmpfs                   206.0M     22.0M    184.0M  11% /tmp
Jan 21 10:30:00 tmpfs                   206.0M     22.1M    183.9M  11% /tmp
Jan 21 11:30:00 tmpfs                   206.0M     22.2M    183.8M  11% /tmp
Jan 21 12:30:00 tmpfs                   206.0M     22.2M    183.8M  11% /tmp
Jan 21 13:30:01 tmpfs                   206.0M     22.3M    183.7M  11% /tmp
Jan 21 14:30:00 tmpfs                   206.0M     22.4M    183.6M  11% /tmp
Jan 21 15:30:00 tmpfs                   206.0M     22.5M    183.5M  11% /tmp
Jan 21 16:30:00 tmpfs                   206.0M     22.6M    183.4M  11% /tmp
Jan 21 17:30:00 tmpfs                   206.0M     22.7M    183.3M  11% /tmp
Jan 21 18:30:00 tmpfs                   206.0M     22.7M    183.2M  11% /tmp
Jan 21 19:30:00 tmpfs                   206.0M     22.8M    183.2M  11% /tmp
Jan 21 20:30:00 tmpfs                   206.0M     22.9M    183.1M  11% /tmp
Jan 21 21:30:00 tmpfs                   206.0M     23.0M    183.0M  11% /tmp
Jan 21 22:30:00 tmpfs                   206.0M     23.1M    182.9M  11% /tmp
Jan 21 23:30:00 tmpfs                   206.0M     23.2M    182.8M  11% /tmp
Jan 22  0:30:00 tmpfs                   206.0M     23.3M    182.7M  11% /tmp
Jan 22  1:30:00 tmpfs                   206.0M     23.3M    182.6M  11% /tmp
Jan 22  2:30:00 tmpfs                   206.0M     23.4M    182.6M  11% /tmp
Jan 22  3:30:00 tmpfs                   206.0M     23.5M    182.5M  11% /tmp
Jan 22  4:30:00 tmpfs                   206.0M     23.6M    182.4M  11% /tmp
Jan 22  5:30:00 tmpfs                   206.0M     23.7M    182.3M  12% /tmp
Jan 22  6:30:00 tmpfs                   206.0M     23.8M    182.2M  12% /tmp
Jan 22  7:30:00 tmpfs                   206.0M     23.9M    182.1M  12% /tmp
Jan 22  8:30:00 tmpfs                   206.0M     23.9M    182.0M  12% /tmp
Jan 22  9:30:00 tmpfs                   206.0M     24.0M    182.0M  12% /tmp
Jan 22 10:30:00 tmpfs                   206.0M     24.1M    181.9M  12% /tmp
Jan 22 11:30:00 tmpfs                   206.0M     24.2M    181.8M  12% /tmp
Jan 22 12:30:00 tmpfs                   206.0M     24.3M    181.7M  12% /tmp
Jan 22 13:30:00 tmpfs                   206.0M     24.4M    181.6M  12% /tmp
Jan 22 14:30:00 tmpfs                   206.0M     24.5M    181.5M  12% /tmp
Jan 22 15:30:00 tmpfs                   206.0M     24.6M    181.4M  12% /tmp
Jan 22 16:30:00 tmpfs                   206.0M     24.6M    181.4M  12% /tmp
Jan 22 17:30:00 tmpfs                   206.0M     24.7M    181.3M  12% /tmp
Jan 22 18:30:00 tmpfs                   206.0M     24.8M    181.2M  12% /tmp
Jan 22 19:30:00 tmpfs                   206.0M     24.9M    181.1M  12% /tmp
Jan 22 20:30:00 tmpfs                   206.0M     25.0M    181.0M  12% /tmp
Jan 22 21:30:00 tmpfs                   206.0M     25.1M    180.9M  12% /tmp
Jan 22 22:30:00 tmpfs                   206.0M     25.2M    180.8M  12% /tmp
Jan 22 23:30:00 tmpfs                   206.0M     25.2M    180.7M  12% /tmp
Jan 23  0:30:00 tmpfs                   206.0M     25.3M    180.7M  12% /tmp
Jan 23  1:30:00 tmpfs                   206.0M     25.4M    180.6M  12% /tmp
Jan 23  2:30:00 tmpfs                   206.0M     25.5M    180.5M  12% /tmp
Jan 23  3:30:00 tmpfs                   206.0M     25.6M    180.4M  12% /tmp
Jan 23  4:30:00 tmpfs                   206.0M     25.7M    180.3M  12% /tmp
Jan 23  5:30:00 tmpfs                   206.0M     25.7M    180.2M  12% /tmp
Jan 23  6:30:00 tmpfs                   206.0M     25.8M    180.2M  13% /tmp
Jan 23  7:30:00 tmpfs                   206.0M     25.9M    180.1M  13% /tmp
Jan 23  8:30:00 tmpfs                   206.0M     26.0M    180.0M  13% /tmp
Jan 23  9:30:00 tmpfs                   206.0M     26.1M    179.9M  13% /tmp
Jan 23 10:30:00 tmpfs                   206.0M     26.2M    179.8M  13% /tmp
Jan 23 11:30:00 tmpfs                   206.0M     26.2M    179.8M  13% /tmp
Jan 23 12:30:00 tmpfs                   206.0M     26.3M    179.7M  13% /tmp
Jan 23 13:30:00 tmpfs                   206.0M     26.4M    179.6M  13% /tmp

What could I do, besides reverting to 386.7_2?
 
I've just rebooted my router and nodes in order to compare with the previous state posted above:

Code:
???@ROUTER:/jffs/scripts# cat /proc/$(pidof networkmap)/status | grep -E -w 'Pid|Name|VmRSS'
Name:   networkmap
Pid:    1206
VmRSS:      7656 kB
???@ROUTER:/jffs/scripts# df -h | grep -E '/(var|tmp)$'
tmpfs                   206.0M    240.0K    205.8M   0% /var
tmpfs                   206.0M      2.0M    204.0M   1% /tmp

Networkmap's VmRSS resumed at the same level it drops to when the process is killed overnight. But /tmp is much lower, at 2.0 MB compared with 26.4 MB before.

Any ideas?
 
Yes, /tmp gets cleared and rebuilt when you reboot, so that is expected.
After /tmp starts growing, you could run this command to try to track down which folder/file is the problem.
Yours seems to be much worse than mine; my /tmp is not growing very fast.

Code:
cd /tmp
du -hx -d 1

OR
Code:
du -hx -d 1 /tmp
 
For the record:
Code:
8.0K    ./dm
880.0K  ./bwdpi
8.0K    ./err_rules
4.0K    ./asusdebuglog
4.0K    ./ppp
332.0K  ./.diag
0       ./asusfbsvcs
24.0K   ./db
8.0K    ./avahi
12.0K   ./asdfile
0       ./netool
16.0K   ./nc
0       ./inadyn.cache
0       ./share
0       ./notify
0       ./mnt
4.0K    ./home
0       ./var
332.0K  ./etc
0       ./confmtd
1.9M    .

I'll try it again after a few days.
 
One can also use the following command to sort the output (the kB values) in descending order and filter out entries with zero size:
Bash:
du -xk -d 1 /tmp | sort -nrt ' ' -k 1 | grep -v "^0"

This way you can easily see the entry with the largest size at the top of the list.
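
If one wanted a record over time (my addition, mirroring the hourly df logging earlier in the thread), the sorted list could be sent to syslog on a schedule:

Bash:
# Log the five largest /tmp entries every hour, tagged for easy filtering
cru a tmp_top "45 * * * * du -xk -d 1 /tmp | sort -nr | head -5 | logger -t tmptop"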
 
Hello, I have exactly the same problem on my AXE11000 on 388.1: the RAM becomes full after 9 days.

Here is mine:
Code:
55040   /tmp
52992   /tmp/.diag
876     /tmp/bwdpi
376     /tmp/etc
24      /tmp/nc
24      /tmp/db
16      /tmp/dm
12      /tmp/asdfile
8       /tmp/avahi
4       /tmp/home
4       /tmp/asusdebuglog

What does that mean?
 
