graphrag
Questioning the Redundant Implementation of LLM and Embedding in graphrag.query.llm
Is there an existing issue for this?
- [ ] I have searched the existing issues
- [ ] I have checked #657 to validate if my issue is covered by community support
Describe the issue
Implementations of OpenAI completion, chat completion, and embedding already exist in graphrag.llm, and graphrag.index's llm module references them. What I don't understand is why graphrag.query.llm does not reference the implementation in graphrag.llm, but instead re-implements the LLM and embedding integrations from scratch. This duplication makes the codebase harder to read and maintain. Is there a technical reason for this design, or is it an oversight?
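To make the suggestion concrete, here is a minimal sketch (not actual graphrag code; all names are hypothetical) of the reuse pattern the issue is asking about: one shared LLM abstraction that both the indexing and query layers consume, rather than each layer carrying its own parallel wrapper hierarchy.

```python
# Hypothetical sketch only -- these classes and functions are illustrative
# and do not correspond to the real graphrag API.
from abc import ABC, abstractmethod


class BaseLLM(ABC):
    """A single shared chat-completion interface."""

    @abstractmethod
    def chat(self, prompt: str) -> str: ...


class FakeOpenAIChat(BaseLLM):
    """Stand-in for a real OpenAI-backed implementation."""

    def chat(self, prompt: str) -> str:
        return f"echo: {prompt}"


def index_documents(llm: BaseLLM, docs: list[str]) -> list[str]:
    # The indexing layer depends only on the shared abstraction...
    return [llm.chat(d) for d in docs]


def answer_query(llm: BaseLLM, question: str) -> str:
    # ...and the query layer reuses the very same abstraction,
    # so there is exactly one place to fix bugs or add providers.
    return llm.chat(question)


llm = FakeOpenAIChat()
print(index_documents(llm, ["doc one", "doc two"]))
print(answer_query(llm, "What changed?"))
```

With this structure, a second, query-only wrapper for OpenAI would be unnecessary; both code paths would share one client and one configuration surface.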
Steps to reproduce
No response
GraphRAG Config Used
# Paste your config here
Logs and screenshots
No response
Additional Information
- GraphRAG Version:
- Operating System:
- Python Version:
- Related Issues: