SNMP - AX88u


Sadly these JSON files didn't exist at the time I wrote my plugin. They help a lot, since the data is generated by the OS constantly and not only when the UI is opened in the browser.
Actually, this is an approach I use to fetch data on a Raspberry Pi (e.g. from GPIO) to reduce load and feed data to SNMP extensions.
I use a systemd timer to generate the data with a single query, producing a number of micro-files saved in a tiny RAM partition. SNMP can then fetch the data from there at light speed whenever there is a request.
As said, I use Cacti with a 5-minute polling cycle, so I can prepare the data by setting the timer trigger at minute 4 of the cycle, +/-30 seconds (using the randomization of systemd timers, which also avoids regularly stacking multiple scripts at the same moment). I know this doesn't make the measurement extremely reliable, but on average I've found it an acceptable compromise between system load and measurement precision (e.g. for temperatures).
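The micro-files idea above can be sketched roughly like this (a hypothetical example, not the actual plugin code; paths and metric names are made up). One expensive query runs per cycle, each value lands in its own tiny file on a RAM-backed directory, and an snmpd `extend` script only has to `cat` one file per request:

```python
import os
import tempfile

def write_metric_files(outdir, metrics):
    """Write each metric to its own tiny file, atomically, so an
    snmpd 'extend' script can read a single file per OID request."""
    os.makedirs(outdir, exist_ok=True)
    for name, value in metrics.items():
        # Write to a temp file first, then rename: readers never
        # see a half-written value.
        fd, tmp = tempfile.mkstemp(dir=outdir)
        with os.fdopen(fd, "w") as f:
            f.write(str(value))
        os.replace(tmp, os.path.join(outdir, name))

# Illustrative values only; on a real setup the directory would be a
# tmpfs mount (e.g. under /run) and snmpd.conf would contain lines like:
#   extend cpu_temp /bin/cat /run/metrics/cpu_temp
outdir = os.path.join(tempfile.gettempdir(), "metrics")
write_metric_files(outdir, {"cpu_temp": 51.2, "load1": 0.42})
```

The atomic rename matters here: SNMP can poll at any moment, so a reader must never catch a file mid-write.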
 
Hmm, I'm wondering about your concerns about the system load. On my router I can't even see a difference in the CPU load graph if I activate Zabbix and my plugin at the same time.
The RT-AX88U has a 1.8 GHz quad-core CPU and is really strong. Do you have so many things running on it, or so many Wi-Fi clients, that you load your router that much?
 
If I may suggest, on the subject of subsequent data collection/storage/analysis on rPi.

It can be a perfectly suitable solution for home/small-office use if you set it up right. I'm running Docker 20.10.6 on Ubuntu 20.04 Server on an rPi 4 8GB: Telegraf + InfluxDB + Grafana for metrics, Graylog (using Elasticsearch and MongoDB) for logs, plus a few other containers like the Mosquitto MQTT broker, Node-RED, Nextcloud, an nginx proxy, etc. It all works like a charm.

There are a couple of things that help significantly:
- Forget SD cards and USB sticks. The rPi 4 has USB 3.0 ports, so external SSDs are a real, viable option now (USB boot is supported natively) and I would strongly suggest you move to these. I'm running a Samsung 970 Evo Plus in a UGREEN NVMe enclosure, which is overkill in a way, but you'd do perfectly fine with something like a Crucial MX500 and a StarTech SATA-to-USB adapter, which are really cheap. The difference compared to an SD card is massive, especially for database workloads, not to mention the reliability.
- RAM is king. 8GB really makes a big difference on the rPi.
- Significantly increasing the swapfile size according to your RAM/usage (Ubuntu has a nice sizing guide in their docs) helps a lot too.

I have found that InfluxDB 2.x in particular can be quite resource-heavy (memory) if you let it run free, so limiting it in Docker helps balance the system better. The same applies to Graylog's Java. But apart from that, nothing is too resource-heavy, even when monitoring a lot of stuff.
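Capping a container's memory, as suggested above for InfluxDB and Graylog, can be done with a Compose fragment along these lines (a rough sketch; the service name, image tag and 1g limit are illustrative, not recommendations):

```yaml
services:
  influxdb:
    image: influxdb:2.0
    # Hard memory cap for this container; the CLI equivalent
    # would be: docker run --memory=1g ...
    mem_limit: 1g
```

Once the kernel enforces the cap, InfluxDB's cache simply can't grow to the point of starving the other containers.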

As for the data capture: I think Telegraf, despite its qualities, is quite a heavy solution for our poor little routers, and I'd say that SNMP is probably a much better option. The same applies to most of the other remote agents snooping around the system and (re)collecting data that SNMP already produces in a much more router-friendly way.

Just my 2p.
 
Sorry for the delay. I had to reinstall the RPi I use for testing (btw, I've added an SSD to the new build, which improves performance a lot but makes the setup bulky...) and I cannot repeat the test right now: I need to reinstall a few things first, since I decided it was the right time to clean up the test bench and start a new clean installation.

Also, I've got an upgrade from 1 Gbit to 2.5 Gbit and I'm a bit busy testing how to get the most out of it (unfortunately the ISP modem doesn't have link aggregation...).

The higher CPU usage was with only extstats pushing to InfluxDB (with all monitoring enabled, excluding the conStats and speedStats modules).
Anyway, I have about 20 Wi-Fi clients (connected about 80% of the time), some on 2.4 GHz and some on 5 GHz, plus about 10 devices connected via Ethernet.
The CPU usage was not tragic; practically speaking I could live with 20% CPU usage, and the system load didn't change significantly (meaning it should be sustainable, though I didn't perform any stress test). However, I feel the load could be much lower.
Note: this was about CPU load, not system load.

Concerning Zabbix: indeed, the Zabbix agent didn't significantly influence either the system load or the CPU usage on the router. The issue there is more about the RPi's resources. However, I've tested it only with the plain v5.0.x agent, which is actually quite useless unless extended with custom agent modules to monitor the more interesting data...
 
I've just rebuilt the test RPi 4 (due to an SD card failure) with a low-cost SSD, and the throughput is amazing. However, I'm still sceptical about adopting such a setup for production/stable use. This may sound strange, but the main reason is that I can't find a good case (at a reasonable price) to enclose everything properly...

Concerning Telegraf, I agree: as it is, it's overkill, mostly for RAM. 85MB or more just for a monitoring agent is not acceptable on a device with 1GB, unless someone is willing to recompile an optimized version removing the 90% of modules that are useless on a router...

Actually, the best overall solution, regardless of the server-side resources required (though with an RPi 4 with 8GB and an SSD it should be usable), is an ELK stack: with that you can get all data in one place, with cross-references linking events to logs. But that is an even bigger gun :cool:
For home-office use, at least for me, it's sufficient to have a setup with Cacti and rsyslog... and maybe Weathermap, to have a nice overview of the data flows without getting into complex (and heavy) stuff like NetFlow.
P.S.: for logging I've tested Loki + Grafana; that works nicely, but it's still much more than I really need.

The biggest issue here is that SNMP doesn't work on the AX88U :confused: (which brings back the main topic of the thread); everything else can be set up efficiently with the technology most appropriate for the specific needs.

IMHO, the best thing would be to write a single script in Python, pre-compiled to byte-code, run with cron (since we don't have systemd) every 5 minutes to build files with data in a directory in RAM, and extend SNMP to read the data from there... minimum overhead on the router :)
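The byte-code part of that suggestion can be sketched like this (filenames and the crontab line are illustrative assumptions, not a tested router setup). `py_compile` produces a `.pyc` once at install time, so cron then skips the parse/compile step on every 5-minute run:

```python
import pathlib
import py_compile
import tempfile

# Stand-in for the real collector script; on the router this would
# be the script that writes the data files into the RAM directory.
src = pathlib.Path(tempfile.gettempdir()) / "collect_stats.py"
src.write_text("print('collecting...')\n")

# Compile once to byte-code next to the source.
pyc = py_compile.compile(str(src), cfile=str(src.with_suffix(".pyc")))

# An illustrative crontab entry would then run the .pyc directly:
#   */5 * * * * /usr/bin/python3 /jffs/scripts/collect_stats.pyc
print(pyc)
```

The saving is modest per run, but on a busy router every bit of avoided startup work helps.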
 
Great discussion on using RPis for monitoring and other functions related to routers.
With the 1 Gbit Ethernet on the Pi 4, hardwired to my AX88U, it makes a very nice "coprocessor".

Very good tips by @Phantomski - I agree with all of them, especially using an SSD.
One more tip: as @xtv mentioned, adding an external/wired SSD starts to make things a bit bulky. I now use a case from Argon that includes a slot for an M.2 SSD. Highly recommended.

 
Ahh, that case seems an excellent solution to the problem. :)
However, the price from Amazon resellers seems to be twice the original price...

I need an RPi not only to gather data for monitoring, but also to serve the local network with proper tools (e.g. a bind9 DNS server to integrate and replicate some local/remote zones, isc-dhcp-server to serve special configs to clients, Avahi to publish some remote servers on the local network, Exim4 to serve the local network, etc...).
That's why I want to keep the load on it as low as possible.
 
BTW, for those interested in extending SNMP even on AX88u, there is a potential workaround.

Unfortunately it is not easy to secure properly (it needs, at the very least, some tinkering with the firewall, since it doesn't support SNMPv3), nor to implement, but it is indeed something worth watching for developments: https://github.com/delimitry/snmp-server

It's a simple Python-based SNMP server that can be extended, with just a bit of Python knowledge, to serve special data.
You can even use snmpset commands to perform custom actions, however I wouldn't trust any snmp input unless it has been sent via SNMPv3 secured connections (I usually don't even publish anything at all via SNMPv1/SNMPv2c).

Also, it doesn't serve anything natively, so for now it's something for special purposes only (maybe on a custom port, with traffic limited by the firewall based on the client's IP/MAC).
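The "extend it to serve special data" part could look something like this. To be clear, this is a hypothetical sketch and NOT the actual API of delimitry/snmp-server; the OIDs (under a made-up enterprise arc) and the metrics directory are invented for illustration. The idea is just to map OIDs to callables that read the pre-generated micro-files, so a GET never triggers an expensive query:

```python
import os
import tempfile

# Directory where a cron/timer job drops the pre-generated micro-files.
METRIC_DIR = os.path.join(tempfile.gettempdir(), "metrics")

def read_metric(name):
    """Return a zero-argument getter that reads one micro-file."""
    def getter():
        with open(os.path.join(METRIC_DIR, name)) as f:
            return f.read().strip()
    return getter

# Enterprise OIDs chosen arbitrarily for illustration.
OID_HANDLERS = {
    "1.3.6.1.4.1.99999.1.1": read_metric("cpu_temp"),
    "1.3.6.1.4.1.99999.1.2": read_metric("load1"),
}

def handle_get(oid):
    """Dispatch a GET request to the matching handler, if any."""
    handler = OID_HANDLERS.get(oid)
    return handler() if handler else None
```

A real integration would have to hook this dispatch into whatever request loop the server actually exposes, but the handler table itself stays this simple.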
 

Thanks for the great tip about the Argon case. I have a few Flirc cases that have been serving me very well, and it's great to know there's one that can also support an internal M.2 SSD, especially given that I have a few lying around the house. :)

Have you tested the case with the fan disconnected (or set not to spin up) by any chance? I'd prefer to have a passively cooled solution and according to this video it should mostly be fine, but I do wonder how soon the ssd might start to throttle with the fan off.
 
This is more or less what I've ended up doing. Though not in byte-code format (perhaps yet? :) ), I've just finished an initial version of the script (heavily based on corgan2224's extstats add-on) that pulls wireless client stats, processes them in memory and sends them to InfluxDB directly. Everything else is gathered via SNMP (including a few snmp extend scripts for single data points).

With the script run every minute, the performance impact on the router (RT-AX86U) has been pretty much negligible. I couldn't really notice much change in system load (instead of hovering between 1.0-1.1 it now hovers between 1.0-1.2, so between about 25-30% per core). Available memory did drop by about 50-60MB, but I enabled the Trend Micro stack at around the same time, and I think chances are good that was the real reason for the change.
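Sending to InfluxDB directly, as described above, boils down to formatting line-protocol records and POSTing them to the 2.x write endpoint. This is a minimal generic sketch (measurement, tag and field names are invented examples, not the actual script's schema):

```python
import time
import urllib.request

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Build one InfluxDB line-protocol record, e.g.
    wifi_clients,band=5g,mac=aa:bb:cc rssi=-52i 1618848458000000000"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

def push(url, org, bucket, token, lines):
    """POST a batch of records to the InfluxDB 2.x HTTP write API."""
    req = urllib.request.Request(
        f"{url}/api/v2/write?org={org}&bucket={bucket}",
        data="\n".join(lines).encode(),
        headers={"Authorization": f"Token {token}"},
    )
    urllib.request.urlopen(req)

# Example record (fixed timestamp so the output is deterministic):
line = to_line_protocol(
    "wifi_clients", {"band": "5g", "mac": "aa:bb:cc"}, {"rssi": -52}, 1000
)
print(line)
```

Batching one POST per run, rather than one per client, is what keeps the router-side overhead low.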

 
if only this could work for the 86u :(
Well, it may not be out of reach. If I remember right, what mainly kept you from trying to implement monitoring through mini-snmpd on the AC86U was the lack of snmp extend support.
I'm only leveraging extend to make about a dozen additional data points available through snmp and those scripts could be pretty easily converted into one that would send the data to InfluxDB directly instead, similarly to the one I'm now using for the wireless client stats.
So if mini-snmpd's other limitations are not of concern, this may be an option. :)
 
I stopped investigating when extend wasn't available. Whether there are more limitations, I don't know.
 

Looks reasonably easy to use. Just need to delve into your code to figure out what values, tags etc. are needed, unless you have a reference already?
The script includes the DB calls with the necessary variables referenced.
I'm using InfluxDBv2, but I've left in the 1.x call from the original (commented out), so that's there as well. Measurement, tags and fields are labeled accordingly, so it should be pretty straightforward.
 
I have not tested with the internal fan disabled. I installed the argon script on Raspbian and left the fan trigger points at default.
The case does get warm to the touch. I also hear the fan cycling pretty often; I am sitting in an office now where it is next to me. The fan goes on for ~2-3 mins, then off for ~1 min, under a fairly light load (I'm running Unbound + Pi-hole and an Emby server).
It's not too noisy, but you do know when it turns on.
 
Some good tips there, thanks a lot. I've yet to deploy and test the complete ELK stack, but for log collection and analysis it's quite a good standard. Another one I'd like to explore is Prometheus.
For now I've settled on Graylog, as they have quite well-designed, customisable pipelines that can do some cool magic with plain unstructured syslogs and turn them into really rich, filtered overviews with graphing/alerting for the most important data. The only problem with Graylog is that they don't really support the ARM architecture (shame on them ;)), so I had to modify the Dockerfile and build the image from scratch.

I'm starting to think that, for our purposes, some simple scripting + the InfluxDB API (or MQTT, for example) would be the way forward. Considering it really needs to be limited to data output, without router control, and running on the local network, it might not even need authentication/encryption (although that's preferable, of course).

Which makes me think: with the MQTT route we would have something truly universal, as you can then ingest the topics with just about anything, from a simple on-site ESP32/OLED display monitor to just about any time-series database or logging/metrics solution, even Home Assistant. With a local broker it would even be possible to ship the data out securely. If I had a vote, I'd go that way.
 


I can only second that recommendation. I'm using the Argon ONE V2 at the moment, it's a great, well thought out and built case and I'm really happy with it. You can manually set the trigger temperatures for various fan speeds and the whole enclosure works as a passive cooler. With my current loads, the fan hardly ever comes up. And when it does, it basically cools it down by 5-10ºC literally in seconds.

The only thing I'm not 100% sure about right now is their M.2 version of the case. Some SSDs are really quite power-hungry (especially during the data folding) and some rPi 4s are a tad sketchy with the maximum power available over USB 3.0. I had some data corruption issues on mine when powering it straight from the USB port (even with good USB-C power sources), so I had to use an externally powered USB hub (which I'd recommend anyway). Their solution with the external "jumper" USB dongle is not ideal, and success with various SSDs varies.
 
On the subject of those loads, and the rPi-as-local-server kind of stuff...

I'm a big fan of Docker. You can set up your rPi core OS as a pretty light Linux server without a GUI or any of the usual bloatware (I like Ubuntu Server, which might not be the best example, but with 20.04 LTS it's rPi-ready) and then run very light container-per-service solutions to your heart's content. With all the networking, logging and resource-management Docker tricks available today, it's a really neat way to move from the traditional one-system-for-all approach to something that's very easy to manage, yet still runs fast and efficiently on, well, one server ;)
 
