ASUS GT-BE96_AI Concerns: Synaptics SL1680 NPU Incompatibility (Ollama & Frigate)

Rabbit-Spec

New Around Here
Hi everyone,

I recently upgraded to the ASUS GT-BE96_AI, the current flagship in the ASUS router lineup. While the networking side (Wi-Fi 7) is impressive, I’m finding the "AI Board" platform to be quite disappointing due to significant compatibility issues with its Synaptics SL1680 SoC.

Despite the marketing focus on its 7.9 TOPS NPU, the software ecosystem seems extremely closed, making it nearly impossible for enthusiasts to utilize that power. Here are the specific issues I've encountered:


1. Ollama / LLM Inference Issue: I tried deploying Ollama on the AI Board. As it turns out, Ollama (and llama.cpp) cannot use the Synaptics NPU at all; inference falls back entirely to the quad-core Cortex-A73 CPU, which immediately hits 100% load.

The root cause appears to be the proprietary Synaptics SyNAP driver framework. Since popular inference engines don't support SyNAP natively, the NPU is essentially "dead weight" for local LLMs.

2. Frigate NPU Acceleration: Even the native Frigate container provided within the ASUS AI Board interface fails to utilize the NPU for object detection. It appears the integration between the containerized environment and the SyNAP hardware layer is either broken or incomplete.
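For context, Frigate selects its object-detection backend through the `detectors` section of its config file, and upstream Frigate has no SyNAP detector type. The sketch below shows what ASUS's integration would presumably have to expose; the `type: synap` name is purely hypothetical. When no hardware detector is configured (or the configured one fails to initialize), detection ends up on the CPU detector, which matches the behavior described above.

```yaml
# Frigate config sketch. The "synap" detector type is hypothetical,
# not an upstream Frigate detector type.
detectors:
  npu0:
    type: synap

# What appears to actually run today is the documented CPU detector:
# detectors:
#   cpu0:
#     type: cpu    # heavy load on the quad-core A73
```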

3. The Choice of SoC (SL1680 vs. RK3588): In hindsight, if ASUS had chosen the Rockchip RK3588, the community support (via rknpu) would have allowed seamless integration with Frigate, Home Assistant, and various LLM backends. By using the SL1680, we are stuck in a "walled garden" dependent on Synaptics' niche SDK.
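For comparison, enabling the NPU in Frigate on an RK3588 board is a small, documented config change using its `rknn` detector. This is a sketch based on Frigate's docs; the model preset name may differ between Frigate versions.

```yaml
# Frigate config for RK3588 boards, per Frigate's rknn detector docs
detectors:
  rknn:
    type: rknn
    num_cores: 3              # the RK3588 NPU has three cores

model:
  path: deci-fp16-yolonas_s   # a YOLO-NAS preset for the RKNN runtime (assumed name)
```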

My Questions to the Community:
Is there any word from ASUS or Synaptics regarding more open drivers or a more robust Docker integration for the NPU?

If we can't even run small models (1B-3B) with NPU acceleration, the "AI" branding on this router feels like a missed opportunity for the power-user community.

Looking forward to any insights or workarounds!
 
Is there any word from ASUS or Synaptics regarding more open drivers or a more robust Docker integration for the NPU?
ASUS hasn't shared anything beyond its own FAQs; I'd recommend contacting them directly about this. For instance, Frigate is supposed to be leveraging the NPU for image recognition; if it isn't, that should be reported as a bug.
 
ASUS hasn't shared anything beyond its own FAQs; I'd recommend contacting them directly about this. For instance, Frigate is supposed to be leveraging the NPU for image recognition; if it isn't, that should be reported as a bug.
Thanks for your reply!

I have already reported these issues to ASUS technical support. The main reason I’m posting here is that there is currently a significant lack of community discussion and documentation regarding the ASUS AI Board and its underlying hardware.

As you mentioned, if even the pre-installed Frigate container isn't leveraging the NPU correctly, it’s a major oversight. I’m hoping to bring more attention to these "black box" hardware issues by sharing them here.
 
