ragflow
Integrates more models
This issue is used to document the LLM, embedding, reranker, etc. models that need to be integrated with RAGFlow.
- [x] Azure Open AI service
- [x] Google Gemini #1036
- [x] Mistral AI #433
- [x] Together.AI #1890
- [x] Cohere model #367
- [x] AWS Bedrock models #308
- [x] Baichuan AI #934
- [x] 01.AI #1951
- [x] Wenxin
- [x] Minimax
- [x] BCE embedding model #326
- [x] Jina embedding models #650
- [x] GPT-4o #775 (text only for now)
- [x] VolcEngine #885
- [x] SiliconFlow #1926
- [x] Novita.ai #1910
- [x] Upstage #1902
- [x] GPT-4o-mini #1827
- [x] Cohere #1849
- [x] Step #1686 #1751
Request to support Qwen-max. Can I modify the code?
@OXOOOOX We intend to create an international community, so we encourage using English for communication.
Yes, you can modify the code and submit a PR. We will merge it into the code base.
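For anyone adding a new provider, a typical first step is a thin chat-completion wrapper around the provider's API. A minimal sketch follows; the class name, method, and endpoint are illustrative assumptions, not RAGFlow's actual interfaces — check the existing model wrappers in the code base for the real pattern before submitting a PR.

```python
# Hypothetical sketch of a chat-model wrapper for an OpenAI-style
# completions API. Names and the endpoint are placeholders, not
# RAGFlow's actual classes or URLs.
from dataclasses import dataclass


@dataclass
class QwenMaxChat:
    """Illustrative wrapper for a Qwen-max style chat endpoint."""
    api_key: str
    model_name: str = "qwen-max"
    base_url: str = "https://example.invalid/v1"  # placeholder endpoint

    def build_payload(self, system: str, history: list[dict]) -> dict:
        # Prepend the system prompt, then append the running conversation,
        # matching the common OpenAI-style message schema.
        return {
            "model": self.model_name,
            "messages": [{"role": "system", "content": system}, *history],
        }
```

A real wrapper would then POST this payload to the provider's chat endpoint and map the response back into the application's message format.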
Would you please add llama-3.1-70b-versatile and llama-3.1-8b-instant for Groq? For now, only Llama 3.0 is available.
Thank you
#1853