chat-ollama
ChatOllama is an open-source chatbot based on LLMs. It supports a wide range of language models as well as knowledge base management.
The error occurred when I used the new llama3.1; it never occurred with other models.
Llama3.1:405b chat does not work: Ollama call failed with status code 500: llama runner process has terminated: error loading model: unable to allocate backend buffer
I just downloaded Llama3.1:8b locally with Ollama and ran it in a chatollama chat. It responded slowly, and when I asked it to translate a passage of English into Chinese, the error "" appeared.
[Use with Docker](https://github.com/sugarforever/chat-ollama?tab=readme-ov-file#use-with-docker) If I want to update, should I rerun this command? `$ docker compose up` Or should I use a service like [Watchtower](https://containrrr.dev/watchtower/) instead?
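For what it's worth, `docker compose up` on its own reuses the images already on disk, so it will not pick up a new release by itself. A common manual update flow looks like the sketch below (this assumes the compose file pulls prebuilt images from a registry rather than building locally; adjust if your setup builds from source):

```shell
# Fetch the latest versions of the images referenced in docker-compose.yaml
docker compose pull

# Recreate only the containers whose images changed; others are left running
docker compose up -d

# Optionally remove the old, now-dangling images to reclaim disk space
docker image prune -f
```

Watchtower automates essentially this same pull-and-recreate cycle on a schedule, so either approach works; the manual commands just give you control over when the update happens.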
I previously used an API key created through OneAPI's per-call channel to process knowledge base files in ChatOllama with a text-embedding model, and it worked at the time. But OneAPI's per-call channel no longer supports text-embedding models; only the quota channel does. So I created an API key through OneAPI's quota channel and tried it in chatollama. Regular chat works fine, but when I create a knowledge base and select text-embedding-large as the embedding model, chatollama reports "cannot read properties of undefined". Backend logs:
```
chatollama-1 | Invalid token from Authorization header.
chatollama-1 | URL: /api/knowledgebases User: null
chatollama-1 | Current User: null
chatollama-1 | Created...
```
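One way to narrow this down might be to check whether the quota-channel key can reach the embedding endpoint at all, outside chatollama, via the OpenAI-compatible `/v1/embeddings` route that OneAPI proxies. A hedged sketch (the base URL, port, and key are placeholders; the model name is the one from the report above):

```shell
# Placeholder endpoint and key: substitute your own OneAPI base URL and
# the API key created through the quota channel.
export ONEAPI_KEY="sk-..."

curl -s http://localhost:3000/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ONEAPI_KEY" \
  -d '{"model": "text-embedding-large", "input": "hello"}'
```

If this returns an embedding vector, the key itself is fine, and the "Invalid token from Authorization header" in the log may point at chatollama's own session/user authentication rather than the model key; if it returns an error, the problem is on the OneAPI side.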
Just as langchain has both JS and Python versions... many of us are hoping to develop in Python, pairing it with Streamlit / Gradio, rather than only the JS stack. +1 for a Python version, keep it up!