Entware Problem mounting opt / installing entware


I am pretty sure you will need custom iptables rules if you want it to be an exit node.
I don't need anything fancy from tailscale, just remote access to the machine.
Check tailscaled logs
Only log file I managed to find was:
./tailscaled.log1.txt:
Code:
{"logtail": {"client_time": "2023-11-10T14:02:14.96889582Z","proc_id": 1078765275,"proc_seq": 2}, "text": "Program starting: v1.46.1, Go 1.20.7: []string{\"tailscaled\", \"--state=/opt/var/tailscaled.state\"}\n"}
{"logtail": {"client_time": "2023-11-10T14:02:14.97001858Z","proc_id": 1078765275,"proc_seq": 3}, "text": "LogID: 6027652d688b387812381efb42d046f8a50b4651b3cf7898b9d43e54eae34723\n"}
{"logtail": {"client_time": "2023-11-10T14:02:14.97012602Z","proc_id": 1078765275,"proc_seq": 4}, "text": "logpolicy: using UserCacheDir, \"/root/.cache/Tailscale\"\nlogpolicy.ConfigFromFile /root/.cache/Tailscale/tailscaled.log.conf: open /root/.cache/Tailscale/tailscaled.log.conf: no such file or directory\nlogpolicy.Config.Validate for /root/.cache/Tailscale/tailscaled.log.conf: config is nil\n"}
{"logtail": {"client_time": "2023-11-10T14:02:14.97206102Z","proc_id": 1078765275,"proc_seq": 5}, "text": "wgengine.NewUserspaceEngine(tun \"tailscale0\") ...\n"}
{"logtail": {"client_time": "2023-11-10T14:02:14.97226394Z","proc_id": 1078765275,"proc_seq": 6}, "text": "Linux kernel version: 4.19.183\n"}
{"logtail": {"client_time": "2023-11-10T14:02:14.99114416Z","proc_id": 1078765275,"proc_seq": 7}, "text": "'modprobe tun' successful\n"}
{"logtail": {"client_time": "2023-11-10T14:02:14.99140596Z","proc_id": 1078765275,"proc_seq": 8}, "text": "/dev/net/tun: Dcrw-------\n"}
{"logtail": {"client_time": "2023-11-10T14:02:14.99158904Z","proc_id": 1078765275,"proc_seq": 9}, "text": "wgengine.NewUserspaceEngine(tun \"tailscale0\") error: tstun.New(\"tailscale0\"): CreateTUN(\"tailscale0\") failed; /dev/net/tun does not exist\n"}
{"logtail": {"client_time": "2023-11-10T14:02:14.9922122Z","proc_id": 1078765275,"proc_seq": 10}, "text": "flushing log.\n"}
{"logtail": {"client_time": "2023-11-10T14:02:14.99235186Z","proc_id": 1078765275,"proc_seq": 11}, "text": "logger closing down\n"}
{"logtail": {"client_time": "2023-11-10T14:02:15.8120439Z","proc_id": 1078765275,"proc_seq": 12}, "text": "getLocalBackend error: createEngine: tstun.New(\"tailscale0\"): CreateTUN(\"tailscale0\") failed; /dev/net/tun does not exist\n"}

Doesn't seem to be much help

I also rebooted the router, started the service, and ran "tailscale up", but the service was already dead...
I have no idea why the service just dies.
 
CreateTUN("tailscale0") failed; /dev/net/tun does not exist
That's why it failed. Surely tailscale should have already defined the tunnel?
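For anyone hitting the same error: a quick way to check whether the device node actually exists before starting tailscaled. The commented-out lines are what you would run on the router as root; major/minor 10,200 is the standard TUN device.

```shell
# Check whether the TUN device node exists; tailscaled needs /dev/net/tun.
if [ -c /dev/net/tun ]; then
    msg="tun device present"
else
    msg="tun device missing"
    # On the router (as root) you would then load the module and, if the
    # node still doesn't appear, create it with the standard TUN
    # major/minor numbers:
    #   modprobe tun
    #   mkdir -p /dev/net && mknod /dev/net/tun c 10 200
fi
echo "$msg"
```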
 
I don't need anything fancy from tailscale, just remote access to the machine.


I also rebooted the router, started the service, and ran "tailscale up", but the service was already dead...
I have no idea why the service just dies.
Oh, I had the same issue, I think; the tun module isn't loaded.

This is what my startup script ran before the tailscaled daemon started:
Code:
! lsmod | grep -q tun && modprobe tun && sleep 1

You should edit the init.d script for tailscaled and add this just before the rc.func line:
Code:
case $1 in
    start|restart)
        ! lsmod | grep -q tun && modprobe tun && sleep 1
    ;;
esac
It should now work if you restart it through the init.d script.
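For anyone puzzled by that one-liner: it relies on shell short-circuiting, so the modprobe only runs when grep finds no tun entry in lsmod's output. The same logic, demonstrated with a canned lsmod output so it can run anywhere:

```shell
# The init.d one-liner loads tun only when lsmod doesn't already list it.
# A canned lsmod output stands in for the real command here.
fake_lsmod="nf_conntrack 98304 2
xt_mark 16384 4"
if ! printf '%s\n' "$fake_lsmod" | grep -q tun; then
    action="would run: modprobe tun && sleep 1"
else
    action="module already loaded"
fi
echo "$action"
```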
 
It should now work if you restart it through the init.d script.
It is able to start now, but it still dies after I kill the ping command.
logs:
Code:
{"logtail": {"client_time": "2023-11-10T14:53:26.97144668Z","proc_id": 1690510277,"proc_seq": 174}, "text": "tailscaled got signal interrupt; shutting down\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97223906Z","proc_id": 1690510277,"proc_seq": 175}, "text": "control: client.Shutdown()\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97236304Z","proc_id": 1690510277,"proc_seq": 176}, "text": "control: client.Shutdown: inSendStatus=0\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97246978Z","proc_id": 1690510277,"proc_seq": 177}, "v":1,"text": "control: authRoutine: context done.\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97253276Z","proc_id": 1690510277,"proc_seq": 178}, "v":1,"text": "control: authRoutine: state:synchronized; goal=nil paused=false\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.9725807Z","proc_id": 1690510277,"proc_seq": 179}, "v":1,"text": "control: authRoutine: quit\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97289332Z","proc_id": 1690510277,"proc_seq": 180}, "v":1,"text": "control: PollNetMap: context canceled\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97295986Z","proc_id": 1690510277,"proc_seq": 181}, "v":1,"text": "control: mapRoutine: state:authenticated\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97309632Z","proc_id": 1690510277,"proc_seq": 182}, "text": "control: mapRoutine: quit\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97350052Z","proc_id": 1690510277,"proc_seq": 183}, "text": "control: Client.Shutdown done.\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97427066Z","proc_id": 1690510277,"proc_seq": 184}, "text": "magicsock: closing connection to derp-18 (conn-close), age 19s\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97437096Z","proc_id": 1690510277,"proc_seq": 185}, "text": "magicsock: closing connection to derp-4 (conn-close), age 2s\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.97443738Z","proc_id": 1690510277,"proc_seq": 186}, "text": "magicsock: 0 active derp conns\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.9755945Z","proc_id": 1690510277,"proc_seq": 187}, "text": "trample: resolv.conf changed from what we expected. did some other program interfere? current contents: \"nameserver 192.168.1.111\\nnameserver 1.1.1.1\\n\"\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.9764238Z","proc_id": 1690510277,"proc_seq": 188}, "v":2,"text": "wg: Routine: receive incoming mkReceiveFunc - stopped\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.9766735Z","proc_id": 1690510277,"proc_seq": 189}, "v":2,"text": "wg: Routine: receive incoming mkReceiveFunc - stopped\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.98672682Z","proc_id": 1690510277,"proc_seq": 190}, "text": "monitor: ip rule deleted: {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:254 Protocol:0 Scope:0 Type:1 Flags:0 Attributes:{Dst:<nil> Src:<nil> Gateway:<nil> OutIface:0 Priority:5210 Table:254 Mark:16711680 Pref:<nil> Expires:<nil> Metrics:<nil> Multipath:[]}}\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.9876031Z","proc_id": 1690510277,"proc_seq": 191}, "text": "monitor: ip rule deleted: {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:253 Protocol:0 Scope:0 Type:1 Flags:0 Attributes:{Dst:<nil> Src:<nil> Gateway:<nil> OutIface:0 Priority:5230 Table:253 Mark:16711680 Pref:<nil> Expires:<nil> Metrics:<nil> Multipath:[]}}\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.9883091Z","proc_id": 1690510277,"proc_seq": 192}, "text": "monitor: ip rule deleted: {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:0 Protocol:0 Scope:0 Type:7 Flags:0 Attributes:{Dst:<nil> Src:<nil> Gateway:<nil> OutIface:0 Priority:5250 Table:0 Mark:16711680 Pref:<nil> Expires:<nil> Metrics:<nil> Multipath:[]}}\n"}
{"logtail": {"client_time": "2023-11-10T14:53:26.98856922Z","proc_id": 1690510277,"proc_seq": 193}, "text": "monitor: ip rule deleted: {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:52 Protocol:0 Scope:0 Type:1 Flags:0 Attributes:{Dst:<nil> Src:<nil> Gateway:<nil> OutIface:0 Priority:5270 Table:52 Mark:0 Pref:<nil> Expires:<nil> Metrics:<nil> Multipath:[]}}\n"}
{"logtail": {"client_time": "2023-11-10T14:53:27.1367486Z","proc_id": 1690510277,"proc_seq": 194}, "v":2,"text": "wg: Device closing\n"}
{"logtail": {"client_time": "2023-11-10T14:53:27.13905098Z","proc_id": 1690510277,"proc_seq": 195}, "v":1,"text": "monitor: RTM_DELROUTE: src=100.80.140.150/0, dst=100.80.140.150/32, gw=, outif=28, table=255\n"}
{"logtail": {"client_time": "2023-11-10T14:53:27.19480168Z","proc_id": 1690510277,"proc_seq": 196}, "v":2,"text": "wg: Routine: receive incoming receiveDERP - stopped\n"}
{"logtail": {"client_time": "2023-11-10T14:53:27.19495904Z","proc_id": 1690510277,"proc_seq": 197}, "v":2,"text": "wg: [bDK5u] - Stopping\n"}
{"logtail": {"client_time": "2023-11-10T14:53:27.19513214Z","proc_id": 1690510277,"proc_seq": 198}, "v":2,"text": "wg: Device closed\n"}
{"logtail": {"client_time": "2023-11-10T14:53:27.1952862Z","proc_id": 1690510277,"proc_seq": 199}, "text": "flushing log.\n"}
{"logtail": {"client_time": "2023-11-10T14:53:27.19541114Z","proc_id": 1690510277,"proc_seq": 200}, "text": "logger closing down\n"}
 
tailscaled got signal interrupt; shutting down
So, for example, Ctrl-C or a SIGINT signal.
Maybe you're executing that ping command in the same terminal session you used to start tailscaled?
Maybe tailscaled doesn't go into the background properly; in that case it's a bug.
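A generic shell demo of why the session matters (not router-specific): a plain `&` background job still belongs to the terminal session, while nohup starts the process with SIGHUP ignored, so it survives the hangup signal a closing session delivers.

```shell
# Start a throwaway background process under nohup; nohup makes it
# ignore SIGHUP, the signal a closing terminal session would deliver.
nohup sleep 30 >/dev/null 2>&1 &
pid=$!
sleep 1
kill -HUP "$pid" 2>/dev/null   # simulate the terminal hanging up
sleep 1
if kill -0 "$pid" 2>/dev/null; then
    survived=yes
else
    survived=no
fi
kill "$pid" 2>/dev/null        # clean up the sleeper
echo "survived SIGHUP: $survived"
```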
 
So, for example, Ctrl-C or a SIGINT signal.
Maybe you're executing that ping command in the same terminal session you used to start tailscaled?
Maybe tailscaled doesn't go into the background properly; in that case it's a bug.
Seems that might be it, I don't have a problem when I run 2 different ssh sessions.

What should I put in the start script to let the tailscaled service run in background?
 
Seems that might be it, I don't have a problem when I run 2 different ssh sessions.

What should I put in the start script to let the tailscaled service run in background?
So a Tailscale daemon bug (or Entware build specific).
I would also test if closing the terminal session doesn't kill the daemon.

Modify S06tailscaled further:
Code:
#!/bin/sh

ENABLED=yes
PROCS=tailscaled
ARGS="--state=/opt/var/tailscaled.state > /dev/null 2>&1 &"
PREARGS="nohup"
DESC=$PROCS
PATH=/opt/sbin:/opt/bin:/opt/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

case $1 in
    start|restart)
        ! lsmod | grep -q tun && modprobe tun && sleep 1
    ;;
esac

. /opt/etc/init.d/rc.func
This runs it with nohup, forces all output to /dev/null, and puts it in the background.
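One caveat worth knowing about that approach (an observation, not something from the thread): redirection operators stored in a shell variable are not interpreted when the variable is expanded; they become literal arguments. So depending on how rc.func launches the process, tailscaled may receive `>`, `/dev/null`, `2>&1` and `&` as bogus flags. A quick demonstration:

```shell
# Redirections inside an unquoted variable expansion are NOT performed
# by the shell; the words become ordinary arguments to the command.
ARGS="--state=/opt/var/tailscaled.state > /dev/null 2>&1 &"
set -- $ARGS                  # word-split exactly as a command line would be
echo "argument count: $#"     # five separate words, including '>' and '&'
for a in "$@"; do echo "arg: $a"; done
```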
 
I would also test if closing the terminal session doesn't kill the daemon.
Closing the session does indeed kill the daemon.
So a Tailscale daemon bug (or Entware build specific).
It seems that Entware doesn't come with the latest version of Tailscale; what would be the best way to get the latest version installed?
Should I just ask on the Entware GitHub, or is it simple enough that I can install it myself?

I tried your code, but for some reason tailscaled isn't running after a reboot; not sure why.
 
Closing the session does indeed kill the daemon.
Seems like a bug, they don't daemonize properly or something.

Should I just ask on the Entware github
Yes. You can also try the official ARM binaries, if Tailscale provides them, but unless they are statically linked they might not work.

I tried your code, but for some reason tailscaled isn't running after a reboot; not sure why.
Does it run if you manually run the init.d script? Perhaps I made a mistake but it looks solid to me.
 
Does it run if you manually run the init.d script? Perhaps I made a mistake but it looks solid to me.
Nope, looks like something is wrong with the script

Code:
Admin@TUF-AX3000_V2-5848:/tmp/home/root# /opt/etc/init.d/S06tailscaled start
 Starting tailscaled...              done.
Admin@TUF-AX3000_V2-5848:/tmp/home/root# tailscale status
failed to connect to local tailscaled; it doesn't appear to be running
Admin@TUF-AX3000_V2-5848:/tmp/home/root# /opt/etc/init.d/S06tailscaled check
 Checking tailscaled...              dead.
Admin@TUF-AX3000_V2-5848:/tmp/home/root#
 
Nope, looks like something is wrong with the script

Try removing nohup from PREARGS and trying again.
If that doesn't work I would suggest trying asking about this on Entware repo or Tailscale forums because I'm out of ideas.
 
Try removing nohup from PREARGS and trying again.
Didn't help.

They have statically linked binaries on their website; I have downloaded and extracted them.
Do I just need to replace the ones in the Entware drive's bin directory? Am I getting this right, or should I do something else?
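For what it's worth, the usual approach would be to stop the service, copy the static binaries over the ones on the Entware drive, and start it again. The destination paths are assumptions to verify first (e.g. with `which tailscaled`). Sketched here against a scratch directory and a fake binary so it can run anywhere:

```shell
# Sketch: installing a replacement binary with install(1), demonstrated
# against a scratch directory. On the router, DEST would be something
# like /opt/sbin (an assumption; check 'which tailscaled' first).
DEST=$(mktemp -d)
printf '#!/bin/sh\necho "fake tailscaled v1.99"\n' > tailscaled.new
install -m 755 tailscaled.new "$DEST/tailscaled"
out=$("$DEST/tailscaled")
echo "$out"
rm -rf "$DEST" tailscaled.new   # clean up the demo files
```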
 
Maybe you could try the "screen" app from entware. It allowed me to run a process in the background on my router unattended. Nothing else seemed to work for me. This is the syntax I used:

Code:
screen -S scriptname -dm sh -c "/path/to/script/script.sh $ARGS > /path/to/log/script.log"
 
Maybe you could try the "screen" app from entware. It allowed me to run a process in the background on my router unattended. Nothing else seemed to work for me. This is the syntax I used:

While it works when I invoke the command manually, once I put it inside the start script it doesn't.
I also can't replace script_usbmount with it, because screen is only available after the start script mounts the drive.
 
While it works when I invoke the command manually, once I put it inside the start script it doesn't.
I also can't replace script_usbmount with it, because screen is only available after the start script mounts the drive.
But you could again modify S06tailscaled to use screen in PREARGS:
Code:
#!/bin/sh

ENABLED=yes
PROCS=tailscaled
ARGS="--state=/opt/var/tailscaled.state"
PREARGS="screen -dmS tailscaled-screen"
DESC=$PROCS
PATH=/opt/sbin:/opt/bin:/opt/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

case $1 in
    start|restart)
        ! lsmod | grep -q tun && modprobe tun && sleep 1
    ;;
esac

. /opt/etc/init.d/rc.func
I just tested this and it works!
 
But you could again modify S06tailscaled to use screen in PREARGS:
I just tested this and it works!
Yep, that did it; now it works straight after a reboot.
I really appreciate the help you and the others gave me!

Just a quick question, why would that work but not the startup.sh. Was it bad shell syntax or something else?
 
Just a quick question, why would that work but not the startup.sh. Was it bad shell syntax or something else?
There's no reason the posted startup.sh wouldn't work.
Are you sure you pasted the file onto the router correctly?
Note that when using the vi command you have to press the A key (to enter insert mode) before pasting.
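An alternative that sidesteps vi paste problems entirely (the filename here is just an example): write the file with a quoted heredoc, which takes the pasted text verbatim without any expansion.

```shell
# Write a script with a quoted heredoc instead of pasting into vi;
# quoting 'EOF' prevents any expansion of the pasted contents.
cat > /tmp/startup-demo.sh <<'EOF'
#!/bin/sh
echo "startup script ran"
EOF
chmod +x /tmp/startup-demo.sh
result=$(/tmp/startup-demo.sh)
echo "$result"
rm -f /tmp/startup-demo.sh   # clean up the demo file
```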
 

Just a quick question, why would that work but not the startup.sh. Was it bad shell syntax or something else?
I never could figure out why my background processes kept stopping unless I used screen. One thought was that maybe it was a security guardian or something doing it, so I stopped digging.
 
