How to support a custom embedding model and LLM?
Describe your problem
In two issues I saw something related to support for custom large models. One of them says to start the model with ollama. The other says to use the script `python rag/llm/rpc_server.py --model_name QWen-14B-chat` (your local LLM). But neither says anything about what to do in the code to customize the embedding model.
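For example, do I need to write an embedding wrapper class somewhere like `rag/llm/embedding_model.py`? Below is my guess at what such a class might look like; the class name, method signatures, and return types are assumptions on my part, not taken from the ragflow code:

```python
# A guessed sketch of a custom embedding wrapper -- the method names
# and return shapes are assumptions, not confirmed from ragflow.
from typing import List, Tuple
import numpy as np
from sentence_transformers import SentenceTransformer

class MyLocalEmbed:
    def __init__(self, model_path: str = "/models/bge-large-zh"):
        # Load a local sentence-embedding model from disk.
        self._model = SentenceTransformer(model_path)

    def encode(self, texts: List[str]) -> Tuple[np.ndarray, int]:
        # Embed a batch of document chunks; the token count here
        # is only a rough whitespace-based estimate.
        vecs = self._model.encode(texts, normalize_embeddings=True)
        return np.array(vecs), sum(len(t.split()) for t in texts)

    def encode_queries(self, text: str) -> Tuple[np.ndarray, int]:
        # Embed a single query string.
        vec = self._model.encode([text], normalize_embeddings=True)[0]
        return np.array(vec), len(text.split())
```

Is this the right shape, or is there an existing base class I should inherit from?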
Also, if I start the service with `python rag/llm/rpc_server.py --model_name ChatGLM3-6b`, what is the interface to it? How is my client supposed to call it?
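For instance, is a client supposed to connect with Python's `multiprocessing.connection` module, something like the sketch below? The host, port, authkey, and message format are all guesses on my part; I could not find them documented anywhere:

```python
# A guessed client sketch -- host, port, authkey, and payload shape
# are assumptions, not confirmed from rag/llm/rpc_server.py.
from multiprocessing.connection import Client

def chat(prompt: str) -> str:
    # Connect to the locally started RPC service.
    with Client(("127.0.0.1", 7860), authkey=b"secret") as conn:
        conn.send({"prompt": prompt})  # guessed request format
        return conn.recv()            # guessed response format

if __name__ == "__main__":
    print(chat("Hello, who are you?"))
```

If it is an HTTP interface instead, what are the endpoints and the request/response schema?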