Rabbit-Spec
Occasional Visitor
Hi everyone,
I recently upgraded to the ASUS GT-BE19000_AI / GT-BE96_AI, the current flagship in the ASUS router lineup. While the networking side (Wi-Fi 7) is impressive, I’m finding the "AI Board" platform to be quite disappointing due to significant compatibility issues with its Synaptics SL1680 SoC.
Despite the marketing focus on its 7.9 TOPS NPU, the software ecosystem seems extremely closed, making it nearly impossible for enthusiasts to utilize that power. Here are the specific issues I've encountered:
1. Ollama / LLM Inference Issue:
I tried deploying Ollama on the AI Board. As it turns out, Ollama (and llama.cpp) cannot use the Synaptics NPU at all; inference falls back entirely to the quad-core Cortex-A73 CPU, which immediately pegs at 100% load.
The root cause seems to be the proprietary Synaptics SyNAP driver framework. Since popular inference engines don't support SyNAP natively, this NPU is essentially "dead weight" for local LLMs.
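For anyone who wants to reproduce this, here is a rough sketch of how I checked the fallback. The `/dev/synap` node name is an assumption on my part (adjust it to whatever your firmware actually exposes); `ollama ps` is a real command whose PROCESSOR column reports "100% CPU" when no accelerator is in use:

```shell
# Check whether a SyNAP device node is visible inside the container.
# NOTE: "/dev/synap" is a guess at the node name, not confirmed by Synaptics docs.
if [ -e /dev/synap ]; then
  echo "SyNAP device node present"
else
  echo "no SyNAP device node visible"
fi

# Confirm where Ollama actually placed the model:
# the PROCESSOR column shows "100% CPU" when inference is CPU-only.
if command -v ollama >/dev/null 2>&1; then
  ollama ps
else
  echo "ollama not installed"
fi
```

On my unit, both checks pointed the same way: no accelerator device visible to the container, and Ollama reporting CPU-only placement.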
2. Frigate NPU Acceleration:
Even the native Frigate container provided within the ASUS AI Board interface fails to utilize the NPU for object detection. It appears the integration between the containerized environment and the SyNAP hardware layer is either broken or incomplete.
3. The Choice of SoC (SL1680 vs. RK3588):
In hindsight, if ASUS had chosen the Rockchip RK3588, the community support (via rknpu) would have allowed seamless integration with Frigate, Home Assistant, and various LLM backends. By using the SL1680, we are stuck in a "walled garden" dependent on Synaptics' niche SDK.
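To illustrate how much simpler the RK3588 path would have been: Frigate's documented RKNN detector is a few lines of config (the `rknn` detector type and `num_cores` option come from Frigate's detector docs; I've omitted the model section since bundled defaults apply):

```yaml
# Frigate config fragment for an RK3588 board (per Frigate's RKNN detector docs)
detectors:
  rknn:
    type: rknn
    num_cores: 3  # number of NPU cores to use on the RK3588
```

Nothing comparable exists for the SL1680 in current Frigate releases, which is exactly the walled-garden problem.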
My questions:
Is there any word from ASUS or Synaptics regarding more open drivers or a more robust Docker integration for the NPU?
If even small models (1B-3B) cannot run with NPU acceleration, then marketing this router as "AI" borders on false advertising as far as enthusiasts are concerned!
Looking forward to any insights or workarounds!
~~~
2026.02.22
I have new information: Frigate has added support for Synaptics NPUs in version 0.17, which is currently at the RC3 preview stage. I hope ASUS will integrate this into the AI Board as soon as possible after the official release of Frigate v0.17.
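If the 0.17 integration follows the pattern of Frigate's other detector plugins, enabling it on the AI Board should come down to a small config change. To be clear, everything below is speculative until the v0.17 docs land: the detector type name (`synap` here) is my guess, not a confirmed key:

```yaml
# HYPOTHETICAL — modeled on Frigate's detector-plugin pattern,
# not taken from released v0.17 documentation.
detectors:
  synap:
    type: synap  # assumed type name; verify against the v0.17 release notes
```

I'll update this thread once the stable release ships and the actual config schema is published.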