Where is the LLM?

Asked: 2025-11-27 07:53:01

Hi! Is the prompt sent over to Deepseek cloud model, and the assistant is only managing voice to text and text to speech, or is the AI inference happening on the device itself? Thanks!

LLM inference
2 answers
SpotPearGueste93f8
Answer time: 2025-11-27 09:16:29

The device mainly handles audio encoding and decoding locally and exchanges data with the server. Model inference is not performed on the device itself.
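To picture that split, here is a minimal client-side sketch in Python: the device streams locally encoded audio up to the server and plays back the encoded audio it receives, while speech recognition, LLM inference, and speech synthesis all run server-side. The endpoint URL, the framing, and the end-of-utterance marker below are assumptions for illustration, not the actual protocol.

    # Minimal sketch of the client side described above: encode audio locally,
    # stream it to the server, and play back the encoded reply.
    # The URL, framing, and end-of-utterance marker are hypothetical.
    import asyncio
    import websockets  # pip install websockets

    SERVER_URL = "wss://example-assistant-server/voice"  # hypothetical endpoint

    def play_encoded_audio(frame: bytes) -> None:
        """Placeholder for the device's local audio decoder/player."""

    async def talk(encoded_audio_frames):
        async with websockets.connect(SERVER_URL) as ws:
            # Upstream: locally encoded audio frames (e.g. Opus), nothing else.
            for frame in encoded_audio_frames:
                await ws.send(frame)
            await ws.send(b"")  # hypothetical end-of-utterance marker
            # Downstream: the server has already done ASR + LLM inference + TTS;
            # the device only decodes and plays what comes back.
            while True:
                reply = await ws.recv()
                if not reply:
                    break
                play_encoded_audio(reply)

    # asyncio.run(talk([b"\x00\x01", b"\x02\x03"]))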


SpotPearGuest58655
Answer time: 2025-12-29 12:51:20

Got it! Is there a way to point it at a different endpoint, such as my own local LLM server? Thanks!
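For reference, by a local LLM server I mean something exposing an OpenAI-compatible chat-completions endpoint on my own machine, roughly like the sketch below; the URL, port, and model name are just placeholders.

    # Rough sketch of the kind of local endpoint meant here: an
    # OpenAI-compatible chat-completions server running on localhost.
    # The URL, port, and model name are placeholders.
    import json
    import urllib.request

    LOCAL_URL = "http://localhost:8080/v1/chat/completions"  # placeholder

    def ask_local_llm(prompt: str) -> str:
        payload = {
            "model": "local-model",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        }
        req = urllib.request.Request(
            LOCAL_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        return body["choices"][0]["message"]["content"]

    # print(ask_local_llm("Hello from my desk"))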
