I used Ollama, but an API key was required. When chatting I got the following error:
ERROR: Invalid API-key provided. Please set LLM API-Key in 'User Setting -> Model Providers -> API-Key'
Current Repo: ragflow
Commit Id: 89004f1
Operating system: Ubuntu 24.04 (Kernel version: 6.8.0-39-generic)
CPU Type: x86_64
Memory: 62Gi
Docker Version: 24.0.7
Python Version: 3.9.19
Could this be database-cache related? I had already tried the configuration steps many times before.
Hi @yangboz, thanks for the mention. If you are adding a local model, such as one added via Ollama, the api_key can be left empty. We will address this in a future update.
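The intended behavior described above can be sketched as a small validation function. This is a hypothetical illustration, not RAGFlow's actual code: the function name `validate_api_key`, the `LOCAL_PROVIDERS` set, and its contents are assumptions made for the example.

```python
from typing import Optional

# Assumed set of providers that run locally and need no key (illustrative only).
LOCAL_PROVIDERS = {"ollama", "xinference", "localai"}

def validate_api_key(provider: str, api_key: str) -> Optional[str]:
    """Return an error message, or None when the configuration is acceptable.

    Local providers such as Ollama authenticate nothing, so an empty
    api_key is fine for them; remote providers still require a key.
    """
    if provider.lower() in LOCAL_PROVIDERS:
        return None  # empty key allowed for local models
    if not api_key.strip():
        return ("Invalid API-key provided. Please set LLM API-Key in "
                "'User Setting -> Model Providers -> API-Key'")
    return None
```

Under this sketch, `validate_api_key("ollama", "")` passes, while a remote provider with an empty key reproduces the error message from the report.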
But this round the error is: ERROR: Model(qwen-plus) not authorized
Actually, the Ollama model I added is llama2.
Maybe Model(qwen-plus) is the default selected option here, and it cannot be deselected or set back to empty?
This list displays all models, with successfully added models highlighted. If you successfully add a model via Ollama, you can scroll down with your mouse wheel to find it, or type "ollama" in the selection bar to locate it, like this:
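The "type to locate" behavior described above amounts to a case-insensitive substring filter over the model list. A minimal sketch (the model names below are made up for the example):

```python
def filter_models(models, query):
    """Case-insensitive substring match, like typing in the selection bar."""
    q = query.lower()
    return [m for m in models if q in m.lower()]

# Example list mixing remote defaults with a locally added Ollama model.
models = ["qwen-plus", "gpt-4", "ollama/llama2"]
```

Typing "ollama" would narrow the dropdown to just the locally added entry.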
But then, after selecting ollama/llama2, I got blank content, as shown in the uploaded screenshot.
This appears to be a browser rendering issue. Refreshing the browser might resolve the problem.
How come? Before chatting, the user has to select a knowledge base along with the model and dialog options; maybe something in those interactive steps causes this, right?
Main IP: 192.168.0.122, other IP: 192.168.0.100. When 192.168.0.100 sends a request to 192.168.0.122, I get "invalid api-key provided" — but I am using an Ollama model!