[Question]: `KeyError: 'base_url'` when submitting llama3.2 Chat Model
Self Checks
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (Language Policy).
- [x] Non-English title submissions will be closed directly (Language Policy).
- [x] Please do not modify this template :) and fill in all the required fields.
Describe your problem
When submitting LLM models via the GUI, I am unable to submit:
Model Type: chat Model Name: llama3.2 Base URL: http://host.docker.internal:11434/v1
It shows the error: KeyError: 'base_url'.
However, I was able to submit bge-m3 with the same base URL:
Model Type: embedding Model Name: bge-m3 Base URL: http://host.docker.internal:11434/v1
without any error.
Why?
Hi @Promethean-Pty-Ltd! I'm Dosu and I’m helping the ragflow team.
The KeyError: 'base_url' happens for chat models like llama3.2 because of how the backend and model classes handle the base_url parameter. For chat models, the code expects base_url to be present in the keyword arguments (kwargs), but it's actually passed as a named argument. This mismatch causes the KeyError. Embedding models don't have this requirement, so they work fine with the same base URL.
To fix this, the chat model class (for example, OllamaChat or similar) should be updated to use the base_url parameter directly, not assume it's always in kwargs. You can check other chat model classes for similar patterns and update them as well. This change should resolve the error for the llama3.2 Chat Model and prevent similar issues in the future. For reference, see how the constructors handle base_url in both chat and embedding models.
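The mismatch described above can be sketched in a few lines. This is a minimal illustration, not RAGFlow's actual code: the class names and signatures here are hypothetical, standing in for `OllamaChat`-style constructors.

```python
# Hypothetical sketch of the bug: base_url is declared as a named
# parameter, so Python binds it there and it never appears in **kwargs.

class BuggyChat:
    """Looks up base_url in kwargs even though it is a named parameter."""
    def __init__(self, key, model_name, base_url=None, **kwargs):
        # base_url was captured by the named parameter above, so this
        # lookup fails with KeyError: 'base_url'.
        self.base_url = kwargs["base_url"]

class FixedChat:
    """Uses the base_url parameter directly, as the suggested fix does."""
    def __init__(self, key, model_name, base_url=None, **kwargs):
        self.base_url = base_url or "http://localhost:11434"

try:
    BuggyChat("key", "llama3.2", base_url="http://host.docker.internal:11434/v1")
except KeyError as exc:
    print(f"KeyError: {exc}")  # reproduces KeyError: 'base_url'

chat = FixedChat("key", "llama3.2", base_url="http://host.docker.internal:11434/v1")
print(chat.base_url)
```

Embedding-model constructors that never look inside `kwargs` for `base_url` avoid the problem entirely, which is why `bge-m3` worked with the same URL.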
Interesting, I cannot reproduce this locally. Can you show how you configured it in your UI? Thanks
@Promethean-Pty-Ltd Try removing the v1 suffix.
http://host.docker.internal:11434/
Pull the nightly version of the Docker image again, or apply the latest code.
FYI: A clean reinstall resolved this.