[Feature Request]: An example to run conversations with GraphRAG
Is your feature request related to a problem? Please describe.
As of now, we don't have an example of conversing with an LLM powered by GraphRAG. I see the code in the documentation notebook. I've looked up the source code, and indeed we can pass a history when constructing a context_builder, but we don't have a client yet.
Describe the solution you'd like
Implement a client that, given a processed knowledge graph, can run conversations with multiple turns.
Additional context
I'd like to help implement this, but I have a few questions:
- How can we determine when to use global search and when to use local search? It seems we also need an LLM to determine whether a question/query is global or local based on the conversation history.
- How can I find the content of references? References like [Data: Reports (377, 327, 182)] are inserted into the assistant's answer, but these are indices; how can I use these indices to find the content?
About second question:
How can I find the content of references? References like [Data: Reports (377, 327, 182)] are inserted into assistant's answer, but these are indices, how can I use these indices to find the content?
If you open the graphrag visualizer tool (https://noworneverev.github.io/graphrag-visualizer/#/data) and import your parquet files, you will see a visual representation. In the Data tab, under community reports, you can search for these entries (e.g. 377) under "human_readable_id" to see the actual details.
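Beyond the visualizer, the ids can also be pulled out programmatically. Below is a minimal sketch that parses the bracketed-citation format shown in the question (the format is assumed from that one example); the function name extract_data_references is hypothetical, not part of graphrag's API. The resulting ids could then be matched against the "human_readable_id" column of your community reports parquet file (e.g. loaded with pandas).

```python
import re

def extract_data_references(answer: str) -> dict[str, list[int]]:
    """Parse '[Data: Reports (377, 327, 182)]'-style citations into
    {table_name: [ids]}. Format assumed from the example above."""
    refs: dict[str, list[int]] = {}
    for table, ids in re.findall(r"\[Data: ([A-Za-z ]+?) \(([\d, ]+)\)\]", answer):
        refs.setdefault(table, []).extend(int(i) for i in ids.split(","))
    return refs

answer = "The main themes converge [Data: Reports (377, 327, 182)]."
print(extract_data_references(answer))  # {'Reports': [377, 327, 182]}
```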
About first question:
How can we determine when to use global search and when to use local search? It seems we also need LLM to determine if a question/query is global or local based on the conversation history.
I also want to know how to do this. Is it going to be an extra LLM call to determine which technique to use?
Question: Also, is remembering the last few conversation turns (chat history) something we need to implement on our side, or should the graphrag framework be providing that functionality?
GraphRAG doesn't currently have any facility to decide which query method is best for your question. As a general rule, global search is better for high-level thematic questions, and local search is better for questions that reference a specific entity. This is because global search operates over the community summaries, whereas local search starts with a vector match against the entity names to find a node to start from.
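That rule of thumb can be approximated cheaply before resorting to an extra LLM call. The sketch below is a hypothetical keyword heuristic, not part of graphrag; the cue list and the function name pick_search_mode are my own assumptions. A more robust router would indeed be an LLM call that classifies the query (given the conversation history) as thematic or entity-specific.

```python
import re

# Hypothetical cue words suggesting a high-level thematic question.
GLOBAL_CUES = {"overall", "theme", "themes", "summary", "summarize",
               "main", "top", "trends", "across"}

def pick_search_mode(query: str) -> str:
    """Return 'global' for broad thematic questions, 'local' otherwise."""
    words = set(re.findall(r"[a-z]+", query.lower()))
    return "global" if words & GLOBAL_CUES else "local"

print(pick_search_mode("What are the main themes in this dataset?"))  # global
print(pick_search_mode("Who founded the company mentioned in chapter 3?"))  # local
```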
Hi, below is a minimal example of how to use conversation history. Are you planning to create a new client that supports conversation history? Otherwise, I can help create a notebook based on the snippet below. Let me know your suggestion. Thanks
from typing import Union
from graphrag.query.context_builder.conversation_history import ConversationHistory, ConversationRole
from graphrag.query.structured_search.global_search.search import GlobalSearch
from graphrag.query.structured_search.local_search.search import LocalSearch
# ---
# other imports go here
# ---
history = ConversationHistory()
# ---
# insert rest of the code up and until the search engine (either global or local) is defined
# the search engine is called `search_engine()` in this example
# ---
async def run_search(query: str, search_engine: Union[GlobalSearch, LocalSearch], history: ConversationHistory):
# add user query to history
history.add_turn(role=ConversationRole.USER, content=query)
# run the graphrag search
result = await search_engine.search(query, conversation_history=history)
response = result.response
    # add graphrag response to history
history.add_turn(role=ConversationRole.ASSISTANT, content=response)
return response
# ---
# remaining code goes here, using `run_search`
# ---
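To show how the remaining multi-turn driver might look, here is a self-contained sketch. The StubSearchEngine and StubHistory classes are hypothetical stand-ins I made up so the control flow runs without graphrag; in real use they would be a configured GlobalSearch/LocalSearch engine and graphrag's ConversationHistory, as in the snippet above.

```python
import asyncio
from types import SimpleNamespace

class StubSearchEngine:
    """Hypothetical stand-in for a graphrag GlobalSearch/LocalSearch engine."""
    async def search(self, query, conversation_history=None):
        # A real engine would query the knowledge graph; this just echoes.
        return SimpleNamespace(response=f"stub answer to: {query}")

class StubHistory:
    """Hypothetical stand-in for graphrag's ConversationHistory."""
    def __init__(self):
        self.turns = []
    def add_turn(self, role, content):
        self.turns.append((role, content))

async def run_search(query, search_engine, history):
    # mirror the snippet above: record the user turn, search, record the answer
    history.add_turn(role="user", content=query)
    result = await search_engine.search(query, conversation_history=history)
    history.add_turn(role="assistant", content=result.response)
    return result.response

async def main():
    history = StubHistory()
    engine = StubSearchEngine()
    for q in ["What are the main themes?", "Tell me more about the first one."]:
        print(await run_search(q, engine, history))
    # each query adds a user turn and an assistant turn
    print(len(history.turns))  # → 4

asyncio.run(main())
```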