LightRAG
[Question]: Do you have plans to make it possible to use Hugging Face's embeddings or LLM models?
Do you need to ask a question?
- [ ] I have searched the existing questions and discussions, and this question is not already answered.
- [ ] I believe this is a legitimate question, not just a bug or feature request.
Your Question
No response
Additional Context
There are models I'd like to try, but they're only available on Hugging Face; the current environment variables only allow using models available through Ollama.
PRs are welcome. Alternatively, you can deploy a liteLLM proxy server, which translates Hugging Face's API into OpenAI-compatible endpoints.
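For anyone trying the proxy route, here is a minimal sketch of a liteLLM config that maps a Hugging Face model to an OpenAI-compatible endpoint. The model name is only an example, and you should verify the exact parameters against liteLLM's provider docs:

```yaml
# config.yaml for the liteLLM proxy (illustrative)
model_list:
  - model_name: hf-chat                       # name clients will request
    litellm_params:
      model: huggingface/meta-llama/Meta-Llama-3-8B-Instruct
      api_key: os.environ/HF_TOKEN            # Hugging Face access token from the environment
```

Then start the proxy with `litellm --config config.yaml` and point LightRAG's OpenAI-compatible base URL at the proxy (by default it listens on port 4000).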
The issue with LiteLLM is that it introduces another layer of failure (timeouts are easy to hit, for instance). A native integration would be nice.
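To make the native-integration idea concrete, here is a hedged sketch of the adapter shape such an integration could take: an async, batched embedding callable that wraps any Hugging Face encoder. The `HFEmbedding` class and its interface are assumptions for illustration, not LightRAG's actual extension point, and the toy encoder is a stand-in so the sketch runs without downloading a model:

```python
import asyncio
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical adapter: the class name and call signature below are
# assumptions about what a native Hugging Face hook could look like;
# LightRAG's real embedding interface may differ.

@dataclass
class HFEmbedding:
    """Wraps a blocking sentence-encoding callable behind an async
    interface (list of texts in, list of vectors out)."""
    encode: Callable[[List[str]], List[List[float]]]  # e.g. SentenceTransformer(...).encode
    batch_size: int = 32

    async def __call__(self, texts: List[str]) -> List[List[float]]:
        vectors: List[List[float]] = []
        for start in range(0, len(texts), self.batch_size):
            batch = texts[start:start + self.batch_size]
            # Run the blocking model call off the event loop.
            vectors.extend(await asyncio.to_thread(self.encode, batch))
        return vectors

# Stand-in encoder so the sketch is runnable offline; in practice this
# would be a real Hugging Face model's encode function.
def toy_encode(batch: List[str]) -> List[List[float]]:
    return [[float(len(t)), float(sum(map(ord, t)) % 97)] for t in batch]

embedder = HFEmbedding(encode=toy_encode, batch_size=2)
print(asyncio.run(embedder(["hello", "hugging face", "lightrag"])))
```

The batching plus `asyncio.to_thread` keeps a CPU- or GPU-bound encoder from blocking the event loop, which is the main thing a native integration would need to get right.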