BACKUPMON v1.5.10 - Mar 1, 2024 - Backup/Restore your Router: JFFS + NVRAM + External USB Drive! (**Thread closed due to age**)


Based upon add-on updates and additions, I have updated my BACKUPMON exclusions file:
Code:
myswap.swp
entware/var/log/*
skynet/skynet.log
entware/var/lib/unbound/unbound.log
entware/share/uiDivStats.d/*.db
*.tar
*.gz
It appears that the uiDivStats database file can grow rather large, causing backups to be unnecessarily large and time-consuming! :eek:
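For context on how an exclusions file like this takes effect: BACKUPMON builds its archives with tar, and a list like the one above is typically fed to tar via its -X/--exclude-from option (an assumption about the internals; the demo files and names below are made up):

```shell
# Build a tiny tree mimicking the layout the exclusion list targets
mkdir -p demo/entware/var/log
echo "important" > demo/keep.txt
echo "noise" > demo/entware/var/log/messages

# One pattern from the exclusions file above
printf '%s\n' 'entware/var/log/*' > exclusions.txt

# Archive the tree, skipping anything matched by the exclusion list
tar -cf backup.tar -X exclusions.txt -C demo keep.txt entware

# List the archive: keep.txt is in, the log file is not
tar -tf backup.tar
```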
 
New version 1.44 released tonight with an emphasis on defining error exit codes to provide a solid integration with @ExtremeFiretop's and @Martinski's MerlinAU autoupdate script. Before MerlinAU automatically flashes the newest firmware to your router, it will run BACKUPMON (if present) and determine whether a successful backup was taken immediately prior. Thanks to both of these guys for their tremendous efforts making this all happen. It's been a pleasure working together!

What's new?
v1.44 - (January 28, 2024)
- ADDED:
More error checking added on creation of TAR files to determine if the script needs to force an exit due to a possible colossal disk or network issue when writing TAR files. Changed many other error exit codes to produce something usable when checking for script success or failure when running a backup from the command line (i.e., the -backup switch). Honored to collaborate with @ExtremeFiretop and @Martinski on their MerlinAU script, so BACKUPMON may run and check for a successful backup prior to an automated FW update.
- FIXED: In the setup/config, when choosing USB or Network as your target media type (option #2), it will blank out your backup target mount point (option #6), and give you a visual indication that "ACTION IS NEEDED" for this item. In certain situations, the source and target mount points could be the same name, and if the media type had "network" as the default media type, while the mount points were pointing to an External USB drive, then the USB drive would get unmounted after a backup. Many thanks to @ExtremeFiretop for finding that doozy! :)
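The exit-code integration can be sketched from the caller's side; a minimal sketch assuming the usual shell convention that 0 means success (only the -backup switch and the script path come from this thread; gate_on_backup and the demo commands are made up):

```shell
# Run a backup command and only allow the next step if it succeeded.
# "gate_on_backup" is a hypothetical helper, not part of BACKUPMON.
gate_on_backup() {
    if "$@"; then
        echo "backup ok: proceeding"
        return 0
    else
        rc=$?
        echo "backup failed (exit $rc): aborting" >&2
        return 1
    fi
}

# On the router this might look like:
#   gate_on_backup sh /jffs/scripts/backupmon.sh -backup && flash_firmware
gate_on_backup true                            # stand-in for a successful backup
gate_on_backup false || echo "flash skipped"   # stand-in for a failed one
```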

Download link (or update directly from within AMTM):
Code:
curl --retry 3 "https://raw.githubusercontent.com/ViktorJp/BACKUPMON/master/backupmon-1.44.sh" -o "/jffs/scripts/backupmon.sh" && chmod 755 "/jffs/scripts/backupmon.sh"

Significant screenshots:

As shown below, in certain situations, a new Backup Target Mount Point is requested to eliminate the possibility of unmounting your External USB drive after a backup completes.
[Screenshot: setup/config prompting for a new Backup Target Mount Point]
 
Based upon add-on updates and additions, I have updated my BACKUPMON exclusions file:
Code:
myswap.swp
entware/var/log/*
skynet/skynet.log
entware/var/lib/unbound/unbound.log
entware/share/uiDivStats.d/*.db
*.tar
*.gz
It appears that the uiDivStats database file can grow rather large, causing backups to be unnecessarily large and time-consuming! :eek:

Thanks @visortgw! I've come to rely on your exclusion files! :)
 
Thanks to both of these guys for their tremendous efforts making this all happen. It's been a pleasure working together!

It was an absolute pleasure, and I'm looking forward to keeping it up in the future!
 
Just for understanding: in addition to the External Drive and JFFS folder structures, you are looking for singular folders for specific scripts, like YazDHCP? So a compressed folder that just contains this script and its settings folders and whatnot? How did nsru do this? Do you have any examples you might be able to share to understand these requirements a bit more, please?
When I ran nsru, I would also separately copy the /opt/var/lib directory, which contains the historical data for Chrony, Unbound, and VnStat. I would also separately copy /jffs/addons/YazDHCP.d, which contains the data for YazDHCP. So when I reinstalled any of those scripts, or changed the USB drive, or needed to do a nuclear reset of the router to install a recent f/w version from an old version, I could restore that dir, and my old data for those scripts would be restored. Same for the YazDHCP dir. Copying that folder made my DHCP assignments so much easier, even when I went from my AC86U to my new AX86U.

But I never figured out where connmon data was stored (oh, yea, I do that also). I also never figured out where each of the scripts stored their settings so I could easily restore them, although I now think some might be stored in /opt/share. I also now think the settings for BackupMon are in /jffs/addons.

So as of now, I envision the new user file would contain 4 lines: /opt/var/lib, /opt/share, /jffs/addons/YazDHCP.d, and /jffs/addons/BackupMon. Hard-coding those locations would be a problem if any of those script writers decided to change any of their locations, but I figured I'd deal with that if and when that happens. But maybe there is a better way to deal with this, like a common API for all amtm scripts that would identify their storage locations so BackupMon could query them and use that data. But that is obviously outside my wheelhouse. For now, just having the ability to add sub-dirs to the backup list that BackupMon uses would be a help.
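The "user file" idea could be as simple as a plain-text list that a backup script reads line by line; a hypothetical sketch (the file name and loop are illustrative, not part of BACKUPMON; the four paths are the ones proposed above):

```shell
# Hypothetical user file: one extra directory to back up per line
cat > extra_dirs.txt <<'EOF'
/opt/var/lib
/opt/share
/jffs/addons/YazDHCP.d
/jffs/addons/BackupMon
EOF

# A backup script could walk the list and add each existing directory
# to its tar set; here we just report what would happen.
while IFS= read -r dir; do
    if [ -d "$dir" ]; then
        echo "would back up: $dir"
    else
        echo "skipping (not present here): $dir"
    fi
done < extra_dirs.txt
```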
 
When I ran nsru, I would also separately copy the /opt/var/lib directory, which contains the historical data for Chrony, Unbound, and VnStat. I would also separately copy /jffs/addons/YazDHCP.d, which contains the data for YazDHCP. So when I reinstalled any of those scripts, or changed the USB drive, or needed to do a nuclear reset of the router to install a recent f/w version from an old version, I could restore that dir, and my old data for those scripts would be restored. Same for the YazDHCP dir. Copying that folder made my DHCP assignments so much easier, even when I went from my AC86U to my new AX86U. But I never figured out where connmon data was stored (oh, yea, I do that also). I also never figured out where each of the scripts stored their settings so I could easily restore them, although I now think some might be stored in /opt/share. I also now think the settings for BackupMon are in /jffs/addons. So as of now, I envision the new user file would contain 4 lines: /opt/var/lib, /opt/share, /jffs/addons/YazDHCP.d, and /jffs/addons/BackupMon. Hard-coding those locations would be a problem if any of those script writers decided to change any of their locations, but I figured I'd deal with that if and when that happens. But maybe there is a better way to deal with this, like a common API for all amtm scripts that would identify their storage locations so BackupMon could query them and use that data. But that is obviously outside my wheelhouse. For now, just having the ability to add sub-dirs to the backup list that BackupMon uses would be a help.
Is all that really necessary though? All those are included in the normal backup and are easily extracted and uploaded separately.
 
Is all that really necessary though? All those are included in the normal backup and are easily extracted and uploaded separately.
My thought exactly late last night when I read this. BACKUPMON archives EVERYTHING from NVRAM, /jffs and the USB drive.
 
Is all that really necessary though? All those are included in the normal backup and are easily extracted and uploaded separately.
My thought exactly late last night when I read this. BACKUPMON archives EVERYTHING from NVRAM, /jffs and the USB drive.

Likewise, @TonyK132... agree with the above. Everything is there in its entirety inside these tar.gz files. You can easily find these specific apps/folders/subfolders in order to pick and choose which files you'd want to extract to restore certain functionality or settings. In the ideal situation, as @JGrana just experienced, you'd want to restore these backups in their entirety back to your router, and you won't lose anything. All your scripts and settings will start as if nothing ever happened on reboot. ;)
 
Likewise, @TonyK132... agree with the above. Everything is there in its entirety inside these tar.gz files. You can easily find these specific apps/folders/subfolders in order to pick and choose which files you'd want to extract to restore certain functionality or settings. In the ideal situation, as @JGrana just experienced, you'd want to restore these backups in their entirety back to your router, and you won't lose anything. All your scripts and settings will start as if nothing ever happened on reboot. ;)
Just for an example, as I was trying to wrap my head around @TonyK132's original post, I did the following on my MacBook Pro to verify what I thought was in the archives:
  1. Copy the tarball.tar.gz file from my NAS to an empty folder (e.g., Expand) on my MBP Desktop.
  2. Open Terminal on the MBP, and navigate to the folder: $ cd Desktop/Expand/
  3. Decompress the tarball: $ gunzip tarball.tar.gz
  4. Extract directories/files from the tarball: $ tar -xvf tarball.tar
You can do similar on the router itself, a Linux workstation, or a Windows PC using the appropriate tools.
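As an aside, steps 3 and 4 can be combined: tar's -z flag decompresses on the fly. A self-contained sketch, with a throwaway archive standing in for a real BACKUPMON backup:

```shell
# Build a small stand-in archive (fake jffs tree, made-up contents)
mkdir -p sample/jffs/scripts
echo '#!/bin/sh' > sample/jffs/scripts/backupmon.sh
tar -czf tarball.tar.gz -C sample jffs

# One-step equivalent of "gunzip" + "tar -xvf": -z handles the .gz
mkdir -p restored
tar -xzf tarball.tar.gz -C restored

# The extracted tree is now browsable
ls restored/jffs/scripts
```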
 
Just for an example, as I was trying to wrap my head around @TonyK132's original post, I did the following on my MacBook Pro to verify what I thought was in the archives:
  1. Copy the tarball.tar.gz file from my NAS to an empty folder (e.g., Expand) on my MBP Desktop.
  2. Open Terminal on the MBP, and navigate to the folder: $ cd Desktop/Expand/
  3. Decompress the tarball: $ gunzip tarball.tar.gz
  4. Extract directories/files from the tarball: $ tar -xvf tarball.tar
You can do similar on the router itself, a Linux workstation, or a Windows PC using the appropriate tools.
Yep! In Windows, you can literally browse and open the tar.gz files directly from Windows Explorer... and you can browse their contents as if they're folders on your PC. It's truly a walk in the park to find and extract any files you need from these master backup files. I don't see any real need to change this.
 
OK, OK. Based on your pushback emails, I started exploring the directory mappings to find the sub-dirs that are important to me. I'll figure this out for myself.
 
OK, OK. Based on your pushback emails, I started exploring the directory mappings to find the sub-dirs that are important to me. I'll figure this out for myself.

The hardest part is determining where all the supporting script files are located. Luckily, the bulk are under /jffs/scripts and /jffs/addons.
 
OK, OK. Based on your pushback emails, I started exploring the directory mappings to find the sub-dirs that are important to me. I'll figure this out for myself.
Not pushback, but rather providing you the detailed info so that you can succeed!
 
The hardest part is determining where all the supporting script files are located. Luckily, the bulk are under /jffs/scripts and /jffs/addons.
The historical data files are in the /opt/var/lib sub-dir. You do not copy that location. But it looks like that gets mapped to entware/var/lib/, which you do copy.
 
The historical data files are in the /opt/var/lib sub-dir. You do not copy that location. But it looks like that gets mapped to entware/var/lib/, which you do copy.

Interesting... So yeah, Backupmon basically copies the raw data across directly from the Ext USB drive and JFFS... I'm sure the OS itself has lots of these types of virtual directories that could come over with the contents of these folders and the actual router .cfg file. When these pieces are restored, everything comes back over as it was before.
 
Interesting... So yeah, Backupmon basically copies the raw data across directly from the Ext USB drive and JFFS... I'm sure the OS itself has lots of these types of virtual directories that could come over with the contents of these folders and the actual router .cfg file. When these pieces are restored, everything comes back over as it was before.
Yes, multiple levels of symbolic links:

opt -> tmp/opt
tmp/opt -> /tmp/mnt/USBDRIVE/entware
So /opt/var/lib equates to /tmp/mnt/USBDRIVE/entware/var/lib.
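That chain can be reproduced and verified on any Linux box; a small sketch using throwaway directories (USBDRIVE is a placeholder for the real drive label):

```shell
# Rebuild the router's two-level symlink chain under a temp dir
base=$(mktemp -d)
mkdir -p "$base/tmp/mnt/USBDRIVE/entware/var/lib"
ln -s "$base/tmp/mnt/USBDRIVE/entware" "$base/tmp/opt"   # tmp/opt -> .../entware
ln -s tmp/opt "$base/opt"                                # opt -> tmp/opt

# readlink -f follows every link and prints the physical location,
# i.e. .../tmp/mnt/USBDRIVE/entware/var/lib
readlink -f "$base/opt/var/lib"
```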
 
Yes, multiple levels of symbolic links:

opt -> tmp/opt
tmp/opt -> /tmp/mnt/USBDRIVE/entware
So /opt/var/lib equates to /tmp/mnt/USBDRIVE/entware/var/lib.

"symbolic links"... that's what they're coined... thank you! :) Great info!
 
@Viktor Jaep I believe I'm seeing similar. Occasionally a daily backup will not be done; I only noticed last night when moving data from one RAID to another. It seems like something else may be resetting the cron list, losing the backupmon entry as well as other entries. I'll dig a little deeper for you when I next see it (not) happen.
@Viktor Jaep it happened again. No backup this morning and missing cron jobs.
At the exact time of the backup cron, my log went absolutely berserk with:
Code:
Jan 31 01:15:03 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 01:15:03 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 01:15:03 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Then from 04:58Z
Code:
Jan 31 04:58:42 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 04:58:42 ripshod kernel: jffs2: read_cache_page() returned error: -5
Jan 31 04:58:42 ripshod kernel: jffs2: Error garbage collecting node at 02970fa4!
Jan 31 04:58:42 ripshod kernel: jffs2: No space for garbage collection. Aborting GC thread
Jan 31 04:59:55 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 04:59:55 ripshod kernel: jffs2: read_cache_page() returned error: -5
Jan 31 04:59:55 ripshod kernel: jffs2: Error garbage collecting node at 02970fa4!
Jan 31 05:00:02 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 05:00:02 ripshod kernel: jffs2: read_cache_page() returned error: -5
Jan 31 05:00:02 ripshod kernel: jffs2: Error garbage collecting node at 02970fa4!
Jan 31 05:00:06 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 05:00:06 ripshod kernel: jffs2: read_cache_page() returned error: -5
This continues 3 or 4 times a minute and is still erroring now, 7.5 hours later, with absolutely nothing else being logged.

Do you think this is related? Is something hardware-wise failing?
 
@Viktor Jaep it happened again. No backup this morning and missing cron jobs.
At the exact time of the backup cron, my log went absolutely berserk with:
Code:
Jan 31 01:15:03 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 01:15:03 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 01:15:03 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Then from 04:58Z
Code:
Jan 31 04:58:42 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 04:58:42 ripshod kernel: jffs2: read_cache_page() returned error: -5
Jan 31 04:58:42 ripshod kernel: jffs2: Error garbage collecting node at 02970fa4!
Jan 31 04:58:42 ripshod kernel: jffs2: No space for garbage collection. Aborting GC thread
Jan 31 04:59:55 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 04:59:55 ripshod kernel: jffs2: read_cache_page() returned error: -5
Jan 31 04:59:55 ripshod kernel: jffs2: Error garbage collecting node at 02970fa4!
Jan 31 05:00:02 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 05:00:02 ripshod kernel: jffs2: read_cache_page() returned error: -5
Jan 31 05:00:02 ripshod kernel: jffs2: Error garbage collecting node at 02970fa4!
Jan 31 05:00:06 ripshod kernel: jffs2: Data CRC 075e7209 != calculated CRC 7d16ee8b for node at 02970fa4
Jan 31 05:00:06 ripshod kernel: jffs2: read_cache_page() returned error: -5
This continues 3 or 4 times a minute and is still erroring now, 7.5 hours later, with absolutely nothing else being logged.

Do you think this is related? Is something hardware-wise failing?
That error can sometimes indicate that the jffs partition is full, or becoming full. Sometimes a reboot will sort it out. Otherwise, examine what processes you have that might be writing to jffs.

If it's not that, then your jffs filesystem will probably need to be reformatted.
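A quick way to check the first possibility; a sketch (on the router you'd point df at /jffs; the fallback to / just lets the command run anywhere):

```shell
# Show how full the partition is: the "Use%" column approaching 100%
# is consistent with jffs2 garbage-collection failures like the above.
df -Pk /jffs 2>/dev/null || df -Pk /
```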
 
