Rabbit-Spec
New Around Here
Hi everyone,
I recently upgraded to the ASUS GT-BE96_AI, the current flagship in the ASUS router lineup. While the networking side (Wi-Fi 7) is impressive, I’m finding the "AI Board" platform to be quite disappointing due to significant compatibility issues with its Synaptics SL1680 SoC.
Despite the marketing focus on its 7.9 TOPS NPU, the software ecosystem seems extremely closed, making it nearly impossible for enthusiasts to utilize that power. Here are the specific issues I've encountered:
1. Ollama / LLM Inference Issue: I tried deploying Ollama on the AI Board. As it turns out, Ollama (and llama.cpp) cannot utilize the Synaptics NPU at all. It falls back entirely to the quad-core Cortex-A73 CPU, which immediately hits 100% load.
The root cause seems to be the proprietary Synaptics SyNAP driver framework. Since popular inference engines don't support SyNAP natively, this NPU is essentially "dead weight" for local LLMs. (A quick repro script is included after this list.)
2. Frigate NPU Acceleration: Even the native Frigate container provided within the ASUS AI Board interface fails to utilize the NPU for object detection. It appears the integration between the containerized environment and the SyNAP hardware layer is either broken or incomplete; see the detector config notes after this list.
3. The Choice of SoC (SL1680 vs. RK3588): In hindsight, if ASUS had chosen the Rockchip RK3588, the community support (via the rknpu driver and rknn-toolkit2) would have allowed seamless integration with Frigate, Home Assistant, and various LLM backends; the RKNN sketch below shows how little code that path requires. By using the SL1680, we are stuck in a "walled garden" dependent on Synaptics' niche SDK.
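For item 1, here is the quick script I used to confirm the CPU fallback. It's a minimal sketch, assuming Ollama is reachable at its default endpoint (127.0.0.1:11434), that `requests` and `psutil` are installed, and that a small model has already been pulled (the model tag below is a placeholder):

```python
import threading

import psutil    # pip install psutil
import requests  # pip install requests

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"  # Ollama's default endpoint
MODEL = "llama3.2:1b"  # placeholder: any small model you have pulled

def sample_cpu(stop, samples):
    # Record system-wide CPU utilization once per second during inference.
    while not stop.is_set():
        samples.append(psutil.cpu_percent(interval=1))

stop, samples = threading.Event(), []
sampler = threading.Thread(target=sample_cpu, args=(stop, samples))
sampler.start()

# Single non-streaming generation request.
resp = requests.post(
    OLLAMA_URL,
    json={"model": MODEL, "prompt": "Why is the sky blue?", "stream": False},
    timeout=600,
).json()

stop.set()
sampler.join()

# eval_duration is reported in nanoseconds.
print("decode speed: %.1f tok/s" % (resp["eval_count"] / (resp["eval_duration"] / 1e9)))
print("avg CPU while generating: %.0f%%" % (sum(samples) / max(len(samples), 1)))
```

On my unit the CPU average sits pinned near 100% for the entire generation, which is exactly what you'd expect if the NPU is never touched.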
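For item 2: as far as I can tell from the Frigate docs, the upstream detector types are cpu, edgetpu, openvino, tensorrt, rknn, and the like; there is no SyNAP type, which would explain why the bundled container quietly falls back to CPU. The only config that actually works on the AI Board today looks like this sketch (the hypothetical synap entry is commented out to make the gap obvious):

```yaml
detectors:
  # Working fallback on the AI Board today: plain CPU detection.
  cpu0:
    type: cpu
    num_threads: 3  # leave one A73 core free for the rest of the system

  # Hypothetical: what a SyNAP-backed entry might look like if Synaptics /
  # ASUS ever shipped a detector plugin upstream. No "synap" type exists now.
  # synap0:
  #   type: synap
```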
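And for item 3, this is roughly what the community path looks like on an RK3588 with rknn-toolkit-lite2, shown purely for contrast; the model filename and input shape are placeholders, and it obviously cannot run on the SL1680:

```python
# Contrast sketch: NPU inference on an RK3588 via the community
# rknn-toolkit-lite2 runtime. Model name and input shape are placeholders.
import numpy as np
from rknnlite.api import RKNNLite

rknn = RKNNLite()
rknn.load_rknn("yolov8n.rknn")                       # pre-converted .rknn model
rknn.init_runtime(core_mask=RKNNLite.NPU_CORE_AUTO)  # let the runtime pick cores

frame = np.zeros((1, 640, 640, 3), dtype=np.uint8)   # stand-in camera frame
outputs = rknn.inference(inputs=[frame])
print([o.shape for o in outputs])
rknn.release()
```

That handful of lines is the foundation Frigate's rknn detector and the various LLM ports build on; I haven't found any equivalent path for SyNAP outside the vendor SDK.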
My Questions to the Community:
Is there any word from ASUS or Synaptics regarding more open drivers or a more robust Docker integration for the NPU?
If we can't even run small models (1B-3B) with NPU acceleration, the "AI" branding on this router feels like a missed opportunity for the power-user community.
Looking forward to any insights or workarounds!