ASUS GT-BE19000_AI / GT-BE96_AI Concerns: Synaptics SL1680 NPU Incompatibility (Ollama & Frigate)

Rabbit-Spec

Occasional Visitor
Hi everyone,

I recently upgraded to the ASUS GT-BE19000_AI / GT-BE96_AI, the current flagship in the ASUS router lineup. While the networking side (Wi-Fi 7) is impressive, I’m finding the "AI Board" platform to be quite disappointing due to significant compatibility issues with its Synaptics SL1680 SoC.

Despite the marketing focus on its 7.9 TOPS NPU, the software ecosystem seems extremely closed, making it nearly impossible for enthusiasts to utilize that power. Here are the specific issues I've encountered:


1. Ollama / LLM Inference Issue:
I tried deploying Ollama on the AI Board. As it turns out, Ollama (and llama.cpp) cannot utilize the Synaptics NPU at all. It falls back entirely to the Quad-core A73 CPU, which immediately hits 100% load.

The root cause seems to be the proprietary Synaptics SyNAP driver framework. Since popular inference engines don't support SyNAP natively, this NPU is essentially "dead weight" for local LLMs.
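For anyone who wants to reproduce this, here is roughly how I checked. Note the device path and model tag below are my own guesses/examples, not confirmed names from Synaptics or ASUS:

```shell
# Rough check from the AI Board's shell (device path is an assumption):
# does the container even see a SyNAP device node?
if ls /dev/synap* >/dev/null 2>&1; then
  echo "SyNAP node visible"
else
  echo "no SyNAP node - expect pure CPU fallback"
fi

# With no NPU backend, Ollama pegs the A73 cores, e.g.:
#   ollama run llama3.2:1b "hello"   # then watch 'top' on the AI Board
```

If no device node ever shows up inside the container, no amount of Ollama configuration will help, because the runtime simply has no hardware backend to target.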

[screenshot attachment]


2. Frigate NPU Acceleration:
Even the native Frigate container provided within the ASUS AI Board interface fails to utilize the NPU for object detection. It appears the integration between the containerized environment and the SyNAP hardware layer is either broken or incomplete.

3. The Choice of SoC (SL1680 vs. RK3588):
In hindsight, if ASUS had chosen the Rockchip RK3588, the community support (via rknpu) would have allowed seamless integration with Frigate, Home Assistant, and various LLM backends. By using the SL1680, we are stuck in a "walled garden" dependent on Synaptics' niche SDK.

My questions:
Is there any word from ASUS or Synaptics regarding more open drivers or a more robust Docker integration for the NPU?

If even small models (1B-3B) cannot run with NPU acceleration, the "AI" branding on this router is nothing short of deceptive for enthusiasts!

Looking forward to any insights or workarounds!

~~~

2026.02.22
I have new information: Frigate has added support for Synaptics NPUs in version 0.17, which is currently at the RC3 preview stage. I hope ASUS will integrate this into the AI Board as soon as possible after the official release of Frigate v0.17.
 
Is there any word from ASUS or Synaptics regarding more open drivers or a more robust Docker integration for the NPU?
Asus hasn't shared anything outside of their own FAQs. I recommend you contact them directly regarding this. For instance, Frigate is supposed to be leveraging the NPU for image recognition - if it doesn't then it should be reported as a bug.
 
Asus hasn't shared anything outside of their own FAQs. I recommend you contact them directly regarding this. For instance, Frigate is supposed to be leveraging the NPU for image recognition - if it doesn't then it should be reported as a bug.
Thanks for your reply!

I have already reported these issues to ASUS technical support. The main reason I’m posting here is that there is currently a significant lack of community discussion and documentation regarding the ASUS AI Board and its underlying hardware.

As you mentioned, if even the pre-installed Frigate container isn't leveraging the NPU correctly, it’s a major oversight. I’m hoping to bring more attention to these "black box" hardware issues by sharing them here.
 
Listen guys, don't rush into the Wi-Fi 7 (or future Wi-Fi 8) hype just yet. Being an early adopter often means you're just a high-paying beta tester for the manufacturers.

The latest flagships like the GT-BE96 are suffering from major software gaps—NPUs that don't work, driver compatibility issues, and unpredictable thermal spikes. You're paying premium prices just to deal with bugs that ruin your experience.

Stick with high-end Wi-Fi 6 / 6E models (like the GT-AX11000 Pro or RT-AX86U Pro). The firmware is rock-solid, the drivers are mature, and the performance is consistent. Don't waste your money and sanity on 'bleeding-edge' tech that isn't ready for prime time. Stability is true luxury.
 
Are you guys New Around Here twins or something? GT-BE19000AI is the only ASUS AI model I can find.
Haha, not twins! I actually just saw his thread as well—it seems we are both struggling with the exact same hardware limitations on this new platform.

To clarify the model name: I am based in China. Since the 6GHz band is not yet open for civilian use here, ASUS released the GT-BE96_AI as a regional flagship. It is essentially the GT-BE19000_AI (the model you found) but with the 6GHz radio/support removed or disabled to comply with local regulations.

The GT-BE96_AI (Mainland China version) features a 2.4GHz + 5.2GHz + 5.8GHz tri-band configuration. This is different from the GT-BE19000_AI, which uses the standard 2.4GHz + 5GHz + 6GHz setup.

Under the hood, they share the same Synaptics SL1680 SoC and the "AI Board" architecture, which is where these software compatibility issues are stemming from.
 
Hi everyone,

I recently upgraded to the ASUS GT-BE96_AI, the current flagship in the ASUS router lineup. While the networking side (Wi-Fi 7) is impressive, I’m finding the "AI Board" platform to be quite disappointing due to significant compatibility issues with its Synaptics SL1680 SoC. […]

Looking forward to any insights or workarounds!
Wi-Fi 7 needs 2 to 3 more years to mature. Early adopters can experiment now, but for a stable experience, Wi-Fi 6 and 6E remain the best choices.

"Wi-Fi 7 is like a concept supercar: stunning on paper, but a headache to daily drive on today's roads. Stick with Wi-Fi 6/6E if you want a reliable 'daily driver'."
 
@Rabbit-Spec I'm afraid this is probably way above my skill level, but I'd be happy to try things if instructed on how. That screenshot in your first post does not look like the Portainer interface we are forced to set up. Are you also required to set up the AI Board with Portainer integrated with Docker?
 
BTW- I saw that there's both a Docker Server and Client in your screenshot. Not sure why both are running? (Sorry if my question is naive)...
[Screenshot: Docker Server and Client question]
 

7000 RMB converts to around USD $1017 or CAD $1388, ouch. Personally, I'm more interested in MediaTek's RT-BE59 / PRT-BE5000, GS-BE7200X and ZenWiFi BT6/BT8 for OpenWrt.
Dual 5 GHz is more logical for my household than dual 6 GHz. Wonder if the language can be changed to English, as Chinese is a non-starter...

Been watching the Banana Pi BPI R4-Pro for a while now. $36x is a little pricey for me for its current state of development...
 
@Rabbit-Spec I'm afraid this is probably way above my level of difficulty, but I'd be happy to try things if instructed on how. That screenshot in your first post does not look like the Portainer interface we are forced to set up. Are you also required to set up the Ai board with Portainer integrated with Docker?
Hi, Portainer is indeed available on the AI Board, but as a user in China, its all-English interface is simply not user-friendly for me. So I deployed another Docker management container called "DPanel" through Portainer. I then deployed Ollama on DPanel.

If you deploy Ollama via Portainer, I believe the outcome should be the same. Ollama cannot invoke the NPU for acceleration, so the model can only run on the CPU.
 
Dual 5 GHz is more logical for my household than dual 6 GHz. Wonder if the language can be changed to English, as Chinese is a non-starter...

Been watching the Banana Pi R4-Pro for a while now. $36x is a little pricey for me for its current state of development...
For the $899.99 price point, the GT-BE19000_AI with full-speed 2.4GHz + 5.2GHz + 6GHz support is a better choice. The 6GHz band with 320MHz bandwidth represents a significant upgrade for Wi-Fi 7.

My GT-BE96_AI can change the firmware language to English in the web backend.
 
Hi, Portainer is indeed available on the AI Board, but as a user in China, its all-English interface is simply not user-friendly for me. So I deployed another Docker management container called "DPanel" through Portainer. I then deployed Ollama on DPanel.

If you deploy Ollama via Portainer, I believe the outcome should be the same. Ollama cannot invoke the NPU for acceleration, so the model can only run on the CPU.
That's kind of unfortunate because you've got a container inside another container. I spent hours trying to update Portainer, and luckily they have a free 3-node introductory tier, so I signed up for that and have updated it twice now.

Ollama seems like an AI development environment? I would be happy to try whatever it was you were running to get that test result with maxed-out CPU utilization. (I'm afraid my coding skills did not go past CS201/college sophomore level.)
 
For the $899.99 price point, the GT-BE19000_AI with full-speed 2.4GHz + 5.2GHz + 6GHz support is a better choice. The 6GHz band with 320MHz bandwidth represents a significant upgrade for Wi-Fi 7.

My GT-BE96_AI can change the firmware language to English in the web backend.
Since my son moved out of the house end of May last year, I have but one WiFi 7/6 GHz OnePlus Open. It's the only 6 GHz capable device. But thank you for reminding me the 5 GHz does not get 320 MHz bandwidth. So far the maximum I've seen is 240 MHz (TP-Link and Netgear).

Also thank you for clarifying that English is available. I've tried ordering from Temu, no English. I couldn't figure out the correct entries on the CC payment screen! Had to have someone translate for me....
 
Since my son moved out of the house end of May last year, I have but one WiFi 7/6 GHz OnePlus Open. It's the only 6 GHz capable device. But thank you for reminding me the 5 GHz does not get 320 MHz bandwidth. So far the maximum I've seen is 240 MHz (TP-Link and Netgear).

Also thank you for clarifying that English is available. I've tried ordering from Temu, no English. I couldn't figure out the correct entries on the CC payment screen! Had to have someone translate for me....
Portainer and DPanel are management panels, not the Docker engine itself.

They don’t create a "container within a container" (nested containers / DinD). Instead, they typically use DooD (Docker-outside-of-Docker) by mounting /var/run/docker.sock. This allows the panel to communicate with the host's Docker daemon. When you deploy a container via these panels, that container runs natively on the host's engine, side by side with the panel container, not inside it.
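For illustration, here is a typical Portainer CE invocation showing the DooD pattern (port and volume names are the common defaults; adjust to taste):

```shell
# DooD: mount the HOST's Docker socket so the panel drives the host engine.
# Anything deployed from the panel then runs as a SIBLING container, never
# "inside" the panel container.
#
#   docker run -d --name portainer \
#     -p 9443:9443 \
#     -v /var/run/docker.sock:/var/run/docker.sock \
#     -v portainer_data:/data \
#     portainer/portainer-ce
#
# Quick sanity check: the socket is the host engine's single control point.
if [ -S /var/run/docker.sock ]; then
  echo "host Docker socket present"
else
  echo "no host Docker socket here"
fi
```

Afterwards, `docker ps` on the host would show portainer, dpanel, and ollama side by side, with no nesting involved.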
 
That's kind of unfortunate because you've got a container inside another container. I spent hours trying to update Portainer, and luckily they have a 3 node free introductory rate so I signed up for that and have updated it twice now.

Ollama seems like an Ai development environment? I would be happy to try whatever it was you were running to get that test result with maxed out CPU utilization. (I'm afraid my coding skills did not go past CS201/college sophomore level).
Not exactly a development environment. Ollama is more of a local AI inference engine or a runtime for LLMs. Think of it like Docker, but specifically for AI models. It handles downloading, running, and managing local models (like Llama 3 or Mistral) through a simple CLI or API, but it's not where you actually write your code.
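A minimal sketch of what using it looks like (the model tag is just an example):

```shell
# Ollama as a local model runtime:
#   ollama pull llama3.2:1b          # fetch a small model
#   ollama run  llama3.2:1b "hi"     # one-shot generation from the CLI
#
# It also serves a local REST API (default port 11434); the request body is
# plain JSON, so any app can call it:
payload='{"model": "llama3.2:1b", "prompt": "hi", "stream": false}'
echo "$payload"
#   curl http://localhost:11434/api/generate -d "$payload"
```

So it manages and serves models the way Docker manages containers; the actual application code that calls the API lives elsewhere.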
 
Asus hasn't shared anything outside of their own FAQs. I recommend you contact them directly regarding this. For instance, Frigate is supposed to be leveraging the NPU for image recognition - if it doesn't then it should be reported as a bug.
I have new information: Frigate has added support for Synaptics NPUs in version 0.17, which is currently at the RC3 preview stage. I hope ASUS will integrate this into the AI Board as soon as possible after the official release of Frigate v0.17.
 
