
How to support a custom embedding model and LLM?

ciaoyizhen opened this issue 1 year ago · 0 comments

Describe your problem

In two other issues I saw something related to supporting custom large models. One of them says to start the model with Ollama; the other says to run `python rag/llm/rpc_server.py --model_name QWen-14B-chat` (your local LLM). But neither says what to change in the code to customize the embedding model.
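For illustration, here is a minimal sketch of what a custom embedding wrapper could look like. The class name, the `encode`/`encode_queries` method names, the return shapes, and the hash-based pseudo-vectors are all assumptions made for this demo — they are not RAGFlow's actual interface, so check `rag/llm/embedding_model.py` in the repo for the real base class before wiring anything in.

```python
import hashlib


class MyCustomEmbedding:
    """Hypothetical custom embedding wrapper (illustrative only).

    A real integration would load an actual model; here we derive
    deterministic pseudo-vectors from a hash so the sketch is runnable.
    """

    def __init__(self, dim: int = 8):
        self.dim = dim

    def encode(self, texts: list[str]):
        """Embed a batch of documents; returns (vectors, token_count)."""
        vectors = [self._embed(t) for t in texts]
        token_count = sum(len(t.split()) for t in texts)  # crude token estimate
        return vectors, token_count

    def encode_queries(self, text: str):
        """Embed a single query string."""
        return self._embed(text), len(text.split())

    def _embed(self, text: str) -> list[float]:
        # Deterministic stand-in for a real model forward pass.
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        return [b / 255.0 for b in digest[: self.dim]]


model = MyCustomEmbedding(dim=8)
vecs, n_tokens = model.encode(["hello world", "custom embeddings"])
```

If RAGFlow resolves embedding backends through a factory or registry (as many such projects do), a class along these lines would presumably be registered there; that registration point is exactly what this issue is asking the maintainers to document.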

Also, if I run `python rag/llm/rpc_server.py --model_name ChatGLM3-6b`, what is the interface to the started service? How is my client supposed to call it?
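To make the question concrete, here is a self-contained sketch of the kind of socket RPC such a script might expose, using Python's `multiprocessing.connection` module. The address, authkey, and message format below are invented for the demo; the actual protocol is whatever `rag/llm/rpc_server.py` implements, which is what this issue asks to have documented.

```python
import threading
from multiprocessing.connection import Listener, Client

ADDRESS = ("127.0.0.1", 19000)  # assumed port, for this demo only
AUTHKEY = b"demo-key"           # assumed shared secret, for this demo only

# --- server side: stand-in for the model-serving process ---
listener = Listener(ADDRESS, authkey=AUTHKEY)


def serve_one_request():
    """Answer a single chat request; a real server would run the LLM here."""
    conn = listener.accept()
    request = conn.recv()                    # e.g. {"prompt": "..."}
    conn.send(f"echo: {request['prompt']}")  # echo instead of generating text
    conn.close()


server = threading.Thread(target=serve_one_request)
server.start()

# --- client side: connect, send a request dict, receive the reply ---
with Client(ADDRESS, authkey=AUTHKEY) as conn:
    conn.send({"prompt": "hello"})
    answer = conn.recv()

server.join()
listener.close()
```

The design question behind the issue remains: whether the real server speaks this kind of pickle-based connection protocol, raw sockets, or HTTP determines how a client must be written, so the request/response schema needs to be documented.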

ciaoyizhen · Apr 16 '24 06:04