
New Script: TurboAsusSec (Beta)--Foray into the world of AI development

For a different router, I was trying to get vnstat running. I wanted a daily email like dn-vnstat sends, except I wanted to cycle through not only the available graphs but also the available interfaces to monitor, à la LuCI's implementation of vnstat. dn-vnstat has a tight little bit of code for generating the graphs, putting them into an email format, and building the email.

So on a lark, I asked Gemini to produce a script that would do that. And it did! And it ran (well, almost)! And it was as ugly a coding job as you can imagine: four interfaces times five graphs handled in 20 separate, individualized routines taking upwards of 200 lines of code, instead of two customizable loops of about eight lines.
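For illustration, the two-loop version the AI missed can be sketched in a few lines of POSIX sh. The interface names, the output directory, and the prompt are assumptions on my part, not dn-vnstat's actual code:

```shell
#!/bin/sh
# Sketch: replace 20 hard-coded routines with two loops.
# vnstati graph flags: -s summary, -h hourly, -d daily, -m monthly, -t top.
INTERFACES="eth0 eth1 wlan0 wlan1"   # assumption: the four monitored interfaces
GRAPHS="s h d m t"                   # the five graph types
OUTDIR="${OUTDIR:-/tmp/vnstat_email}"
mkdir -p "$OUTDIR"

count=0
for iface in $INTERFACES; do
    for g in $GRAPHS; do
        out="$OUTDIR/${iface}_${g}.png"
        if command -v vnstati >/dev/null 2>&1; then
            vnstati "-$g" -i "$iface" -o "$out"   # render the graph image
        else
            echo "would run: vnstati -$g -i $iface -o $out"
        fi
        count=$((count + 1))
    done
done
echo "generated/planned $count images"
```

The resulting PNGs can then be attached or inlined into the daily email the same way dn-vnstat builds its message.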
 
I posted a link with some best practices (from an AI; I haven't created my own BP yet), but it sounds like you are creating a rather short script that you can probably do yourself. Most of the online AIs are not going to be familiar with all aspects of Merlin, or they'll make assumptions--they're about 75% right. Claude seems to be the best at any coding task, short or long. It helps if you can give the AI an example of the scripts that already exist, either by uploading what you have on hand or by providing a link to at least one that you like. If you get into larger projects, it helps to provide a few example scripts. Of course, the AI is not going to be familiar with installed Entware packages, existing aliases, and other system information specific to Merlin.

In reality, especially if you are proficient at coding Merlin scripts, you can likely do most things faster than working with the AI, because you have to gather resources, explain things clearly, and understand what the AI is looking for. Hint: if the tool lets you configure a system prompt, set one--or set one during the first part of your conversation. A prompt is best looked at (IMO) as something like a job description: what you want the AI to do.

My best advice, if you are already proficient in Merlin development, is to look at the AI as a second set of eyes, a teammate, and/or a tool for QA-type tasks--and set a prompt as described above. Maybe just start by loading in a project and asking what it thinks; as you release versions, you might find the AI helpful between releases as it learns about the project. You'll also learn more about using the AI: connecting it to GitHub, allowing it to search, and so on. Someone with little knowledge of Merlin is likely to fail to even get an AI-generated script to run, so using AI to create scripts is probably an intermediate-and-up collaboration with the free AI resources currently available. This wasn't my first go at creating or modifying a custom script for Merlin; I did a little of that before AI came along.

All this being said, I haven't made use of anything more than the chatbots, including Gemini Pro, which you can use for free with an API key--so none of the tools specifically built for coding that can crank out Android/iPhone apps from simple ideas, like Gemini Code Assist. Gemini Pro is currently (as of the last couple of months) ranked pretty high among the online models; it might be top at the moment....
 
This was for OpenWrt. So far my forays into AI have led me to believe that it is always helpful, and always wrong in some way. I was a little dismayed yesterday that Gemini asked me which of two versions of a response I liked better. Both were wrong, so I got to choose the lesser of two weevils, and Gemini must now think one was right. Post-training tripe.
 
Yeah, it's only about 75% right. What I mean is that you'll get an answer to a simple question, and it might be right almost 100% of the time. When you start asking more complex questions (building code, etc.), you'll notice small mistakes (if you have prior subject knowledge or fact-check), although it will be mostly right. Oftentimes, even when I tell it something, it will reference what I've said incorrectly. I'll then go back, make sure I was right, let the AI know what I said, and then it will apologize for making the mistake and correct it. Six months ago I may not have been asking as hard a question at times, and the models I was working with weren't the best at acknowledging mistakes--so this is something they've started fixing on cloud models, along with many other issues, over the last few months, making AI more viable.

If you are using Gemini, my suggestion is to run AnythingLLM Desktop (or the Docker version if you're familiar with it and want access from multiple systems), where you can plug in a Gemini API key and--currently--run the paid online models for free. Within AnythingLLM you can adjust model parameters to make the output more or less creative (temperature), add more certainty, give the model search capabilities that proxy through your PC, and enable RAG (documents that you feed the system). AnythingLLM simplifies all these things for me. Some of this is already available using the regular Google tools--you can point it to your Drive, email, etc. AnythingLLM gives you a little more granular management on a per-chat basis, and you don't have to run the models locally, although you have the option to do so.
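To make the temperature point concrete, here's a hedged sketch of the kind of request a front end makes against Google's public generateContent REST endpoint. The model name, prompt text, and temperature value are placeholders; the curl line is commented out because it needs a real GEMINI_API_KEY and network access:

```shell
#!/bin/sh
# Sketch (assumed model name): build a Gemini API request body with a
# generationConfig.temperature setting.
MODEL="gemini-1.5-pro"   # placeholder model name
TEMP="0.2"               # lower = more deterministic, higher = more creative

PAYLOAD=$(cat <<EOF
{
  "contents": [{"parts": [{"text": "Write a POSIX sh loop over four interfaces"}]}],
  "generationConfig": {"temperature": $TEMP}
}
EOF
)

# To actually send it (requires a key and network access):
# curl -s -X POST \
#   "https://generativelanguage.googleapis.com/v1beta/models/$MODEL:generateContent?key=$GEMINI_API_KEY" \
#   -H 'Content-Type: application/json' \
#   -d "$PAYLOAD"

printf '%s\n' "$PAYLOAD"
```

A tool like AnythingLLM is essentially managing that generationConfig for you per chat, instead of you editing JSON by hand.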

For local AI, AnythingLLM is sort of a front end for me; you then need what I refer to as a runner, like Ollama, llama.cpp, or LM Studio. I was using Ollama as a runner. It's easy to set up, but they haven't developed support for AMD graphics the way I'd hoped, so I've switched to LM Studio. I find it quite convenient--it probably has too many options for me currently.

Even with the 8B models you're getting pretty good accuracy compared to most cloud models; you don't lose a lot of accuracy with fewer parameters, and those cloud models are typically over 100B, some as large as 1000B or higher. What I've found is that more training data doesn't necessarily mean a better model, especially with distilled models--sort of like how more GHz on a processor doesn't always translate to better performance. As a rule of thumb, if you have a smaller GPU or just an APU, you can use the smaller 4B models (these will run locally on CPU), and there are models for phones that are under 1B... The rule of thumb is:
4B > high-school level
8B > college level
14B > graduate level
Anything above that I would say is capable of being a professional of sorts. What you'll find as you go up is fewer mistakes, slightly better capabilities, and maybe fewer hallucinations (not always). I'm interested more in running them locally; we don't need all these datacenters, tracking, etc. 😈. This is part of how we get RAM prices back to normal and get rid of the tracking and all the other issues we're seeing. Leave some folks holding the bag....
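A quick way to sanity-check which parameter size your hardware can handle is the rough rule size ≈ parameters × bits-per-weight ÷ 8, plus some overhead for context. This is a back-of-the-envelope sketch; the 20% overhead figure is my assumption, not a measured value:

```shell
#!/bin/sh
# Rough memory estimate for running a model locally:
#   GB ≈ params (billions) * quantization bits / 8, plus ~20% overhead (assumed).
estimate_gb() {
    params_b=$1   # parameter count in billions
    bits=$2       # bits per weight: 16 = fp16, 8 or 4 = quantized
    awk -v p="$params_b" -v b="$bits" 'BEGIN { printf "%.1f", p * b / 8 * 1.2 }'
}

printf '%s GB for a 4B model at 4-bit\n'  "$(estimate_gb 4 4)"
printf '%s GB for an 8B model at 4-bit\n' "$(estimate_gb 8 4)"
printf '%s GB for a 14B model at 4-bit\n' "$(estimate_gb 14 4)"
```

By this rule an 8B model at 4-bit quantization needs roughly 5 GB, which is why it fits on a modest GPU or even in system RAM on an APU, while the 100B+ cloud models are out of reach for home hardware.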
 
