
Should the ollama setting here point to my local configuration?

labixiaoyuan opened this issue 1 year ago • 8 comments

[image]

labixiaoyuan avatar Jun 19 '24 07:06 labixiaoyuan

[image] [image] I added the ollama configuration in the code, but after starting it there is still no further output, same as before. [image] Could anyone offer a good suggestion?

labixiaoyuan avatar Jun 19 '24 09:06 labixiaoyuan

Your local ollama needs to have the model that is defined in the TS code; where the request is made, change the model to one you actually have locally.
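For reference, a minimal sketch of what a request body for Ollama's `/api/generate` endpoint looks like. The endpoint path and field names follow Ollama's documented API; the function name and defaults are illustrative, not the actual OpenGlass source. The point is that the `model` field must name a model that `ollama list` actually shows on your machine:

```typescript
// Sketch only: build a request for a local Ollama server.
// Field names ("model", "prompt", "images", "stream") follow Ollama's API;
// buildOllamaRequest is a hypothetical helper, not OpenGlass code.
const OLLAMA_URL = "http://localhost:11434/api/generate";

interface OllamaRequest {
  model: string;      // must match a model you have pulled locally
  prompt: string;
  images?: string[];  // base64-encoded images, for vision models
  stream: boolean;
}

function buildOllamaRequest(
  model: string,
  prompt: string,
  images?: string[]
): OllamaRequest {
  return { model, prompt, images, stream: false };
}

// Example: ask a locally pulled moondream model to describe an image.
const req = buildOllamaRequest(
  "moondream:1.8b-v2-fp16",
  "Describe this image.",
  ["<base64 image data>"]
);
// Sending it would look like:
// await fetch(OLLAMA_URL, { method: "POST", body: JSON.stringify(req) });
```

If the `model` string here names something you have not pulled, the server responds with an error and the page shows nothing, which matches the symptom in this thread.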

GALA009 avatar Jun 20 '24 01:06 GALA009

The local model is already configured where the request is made. Going by what you said, I also need the model referenced in the TS code?

labixiaoyuan avatar Jun 27 '24 01:06 labixiaoyuan

I tried to add the API in almost the same way, and I ran into almost the same problem. The picture shows how I added the local Ollama API. [image]

Then I ran the Ollama server, started the OpenGlass web page, and connected the XIAO ESP32-S3 via BLE. The ESP32-S3 did pass a picture to the Ollama server every time it took one. But when I typed some words into the text box, nothing appeared on the web page. This is the terminal while running OpenGlass and Ollama with moondream. [image]

And this is the web page. [image]

So I tried to capture the traffic between the OpenGlass localhost and the Ollama server with Wireshark. I found that the Ollama server definitely processed the images and generated the related output, but nothing was displayed on the web page. Here is the output captured by Wireshark. [image]
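A common cause of "the server replies but the page shows nothing" is a response-format mismatch: when `stream` is enabled, Ollama's `/api/generate` returns newline-delimited JSON, one object per line with a partial `response` string, and a client that parses the whole body as a single JSON object gets nothing usable. A hedged sketch of collecting such a stream (the sample chunks below are fabricated; only the `response`/`done` field names come from Ollama's API):

```typescript
// Ollama's streaming /api/generate response is NDJSON: one JSON object per
// line, each carrying a partial "response" string, with "done": true on the
// final chunk. Concatenating the "response" fields yields the full answer.
interface OllamaChunk {
  response: string;
  done: boolean;
}

function collectOllamaStream(ndjson: string): string {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as OllamaChunk)
    .map((chunk) => chunk.response)
    .join("");
}

// Example with made-up chunks in Ollama's streaming shape:
const body =
  '{"response":"A hand ","done":false}\n' +
  '{"response":"holding a phone.","done":true}\n';
console.log(collectOllamaStream(body)); // "A hand holding a phone."
```

If Wireshark shows output like this on the wire but the chat box stays empty, checking whether the frontend expects a single JSON body (or a different provider's schema) is a reasonable next step.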

So my question is, what should I do to let the messages display on the chat box of the Web page?

ojeffreyo avatar Jun 29 '24 08:06 ojeffreyo

You may try the 'llava-llama3' model on ollama. It is a language model with vision.

HaoHoo avatar Jul 05 '24 13:07 HaoHoo

So my question is, what should I do to let the messages display on the chat box of the Web page?

Same question but not the same situation.

I changed 'imageDescription.ts' to use the 'llava-llama3' model on the local ollama runtime. I still cannot get any response on the Expo page by typing in the chat area, but I can find the model's response in the browser's developer tools.

[Screenshot 2024-07-06 14:28:39]

When I used the Groq API, I could get model responses in the first round, but lost them once the image appeared on the page.

HaoHoo avatar Jul 06 '24 06:07 HaoHoo

If you choose the 'moondream' model like the original code, you can use 'ollama pull moondream:1.8b-v2-fp16' to download the model. [image] You can also refer to the recording demo: https://www.loom.com/share/4c5666ef283f4b33b4705a21c71fc461

HaoHoo avatar Jul 08 '24 04:07 HaoHoo

If you choose the 'moondream' model like the original code, you can use 'ollama pull moondream:1.8b-v2-fp16' to download the model. [image] You can also refer to the recording demo: https://www.loom.com/share/4c5666ef283f4b33b4705a21c71fc461

Thank you for the material you provided; I will try this today.

ojeffreyo avatar Jul 08 '24 04:07 ojeffreyo