Does Graphiti Support DeepSeek LLM?
Hello Graphiti Team! 👋
I'm exploring integrating alternative models and would appreciate your guidance on two aspects:
- **DeepSeek LLM Support**
  - Does Graphiti officially support DeepSeek language models?
  - If yes, would replacing `OpenAIClient`'s config with `model="deepseek-chat"` be the correct approach?
  - Are there any working examples available?
- **HuggingFace Embeddings**
  - For embedding models, how should we properly replace the default with HuggingFace open-source models (e.g., `intfloat/multilingual-e5`)?
  - Would you have a concrete example of configuring this through `GraphitiCore` initialization?
Thank you for your time and for building such a valuable tool. Looking forward to your insights.
Hey,
What service are you using for DeepSeek? If you are using something like Ollama to deploy it locally, you should be able to expose it through an OpenAI-compatible endpoint. Then you can use the `openai_generic_client` and set the model name to your model (the deployment should also give you the necessary API key).
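For what it's worth, here is a minimal sketch of that wiring. It assumes Ollama is serving its OpenAI-compatible endpoint at `http://localhost:11434/v1`, that the model tag below is hypothetical (use whatever `ollama list` shows), and that `OpenAIGenericClient` and `LLMConfig` live at the import paths shown; double-check against the current source.

```python
from graphiti_core import Graphiti
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.openai_generic_client import OpenAIGenericClient

# Point the generic OpenAI-compatible client at the local Ollama endpoint.
llm_config = LLMConfig(
    api_key="ollama",  # Ollama ignores the key, but the client expects one
    model="deepseek-r1:14b",  # hypothetical tag; use whatever you pulled
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
)

graphiti = Graphiti(
    "bolt://localhost:7687",  # your Neo4j connection details
    "neo4j",
    "password",
    llm_client=OpenAIGenericClient(config=llm_config),
)
```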
If you are using a third-party API service for DeepSeek, we support that as well. We have a Groq client, for example, but you can also use any OpenAI-compatible API with the generic client above. If the service you use isn't supported, we can also help you submit a new `llm_client` for your use case.
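For the hosted DeepSeek API, only the config should need to change relative to the sketch above; the base URL and model name below come from DeepSeek's public docs and may change, so treat this as an untested sketch.

```python
# Same imports and Graphiti construction as the Ollama sketch above;
# only the LLMConfig differs.
llm_config = LLMConfig(
    api_key="sk-...",  # your DeepSeek API key
    model="deepseek-chat",
    base_url="https://api.deepseek.com/v1",  # DeepSeek's OpenAI-compatible endpoint
)
```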
For the HuggingFace embeddings, we don't actually have an example of a client for that, which is an oversight on our part (we actually use HuggingFace models for our own deployment of Graphiti as part of Zep). I will work on adding that to Graphiti today. In the meantime, you can look at how we implement an HF model for the cross-encoder and do something similar for the embedder: `graphiti_core/cross_encoder/bge_reranker_client.py`.
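To make that concrete, here is a rough sketch of what such an embedder could look like, modeled on the cross-encoder client's pattern of loading a HuggingFace model once and running inference locally. It assumes `EmbedderClient` is importable from `graphiti_core.embedder.client` and exposes an async `create` method returning a list of floats, and it uses `intfloat/multilingual-e5-base` (one specific variant of the family mentioned above) via `sentence-transformers`; verify the base class's actual interface before relying on this.

```python
from sentence_transformers import SentenceTransformer

from graphiti_core.embedder.client import EmbedderClient


class MultilingualE5Embedder(EmbedderClient):
    """Hypothetical embedder backed by a local HuggingFace model."""

    def __init__(self, model_name: str = "intfloat/multilingual-e5-base"):
        # Load the model once at construction, as the BGE reranker client does.
        self.model = SentenceTransformer(model_name)

    async def create(self, input_data: str) -> list[float]:
        # E5 models expect a task prefix on each input; "query: " is a
        # reasonable default for single-text embedding.
        embedding = self.model.encode(f"query: {input_data}", normalize_embeddings=True)
        return embedding.tolist()
```

An instance of this would then be passed at initialization, e.g. `Graphiti(uri, user, password, embedder=MultilingualE5Embedder())`, assuming the constructor accepts an `embedder` argument the way it accepts `llm_client`.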
Has anyone gotten DeepSeek integration working? I tried using the DeepSeek API and failed.