
[Feature Request]: An example to run conversations with GraphRAG

Open ifsheldon opened this issue 1 year ago • 4 comments

Is your feature request related to a problem? Please describe.

As of now, we don't have an example of running a conversation with an LLM powered by GraphRAG. I see the code in the documentation notebook. I've looked through the source code, and we can indeed pass a history when constructing a context_builder, but we don't have a client yet.

Describe the solution you'd like

Implement a client that, given a processed knowledge graph, can run multi-turn conversations.

Additional context

I'd like to help implement this, but I have a few questions:

  1. How can we determine when to use global search and when to use local search? It seems we also need an LLM to determine whether a question/query is global or local based on the conversation history.
  2. How can I find the content of references? References like [Data: Reports (377, 327, 182)] are inserted into the assistant's answer, but these are only indices. How can I use them to find the underlying content?

ifsheldon avatar Jul 10 '24 08:07 ifsheldon

This issue has been marked stale due to inactivity after repo maintainer or community member responses that request more information or suggest a solution. It will be closed after five additional days.

github-actions[bot] avatar Jul 27 '24 01:07 github-actions[bot]

About the second question:

How can I find the content of references? References like [Data: Reports (377, 327, 182)] are inserted into the assistant's answer, but these are only indices. How can I use them to find the underlying content?

If you open the graphrag visualizer tool (https://noworneverev.github.io/graphrag-visualizer/#/data) and import your parquet files, you will see a visual representation. In the Data tab, under community reports, you can search for entries like 377 in the "human_readable_id" column to see the actual details.
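If you'd rather do the lookup programmatically, the indices can be parsed out of the answer text and matched against the `human_readable_id` column of the community reports parquet. A minimal sketch (the parquet filename varies between graphrag versions, so treat it as an assumption):

```python
import re

def extract_report_ids(answer: str) -> list[int]:
    """Pull the numeric indices out of reference markers like
    '[Data: Reports (377, 327, 182)]' in a GraphRAG answer."""
    ids: list[int] = []
    for group in re.findall(r"\[Data: Reports \(([^)]*)\)\]", answer):
        ids.extend(int(tok) for tok in group.split(",") if tok.strip().isdigit())
    return ids

answer = "GraphRAG found several themes [Data: Reports (377, 327, 182)]"
print(extract_report_ids(answer))  # [377, 327, 182]

# The indices map to the `human_readable_id` column of the community
# reports parquet output, e.g. (filename is version-dependent):
# import pandas as pd
# reports = pd.read_parquet("output/create_final_community_reports.parquet")
# reports[reports["human_readable_id"].isin(extract_report_ids(answer))]
```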

About the first question: How can we determine when to use global search and when to use local search? It seems we also need an LLM to determine whether a question/query is global or local based on the conversation history.

I also want to know how to do this. Is it going to take an extra LLM call to determine which technique to use?

Question: also, is remembering the last few turns of chat history something we need to implement on our side, or should the graphrag framework provide that functionality?

amitt0488 avatar Apr 24 '25 13:04 amitt0488

GraphRAG doesn't currently have any facility to decide which query method is best for your question. As a general rule, global search is better for high-level thematic questions, and local search is better for questions that reference a specific entity. This is because global search operates over the community summaries, whereas local search starts with a vector match against the entity names to find a node to start from.
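Since no built-in router exists, one option is an extra LLM classification call along the lines natoverse describes. Below is a hedged sketch; `classify` is a hypothetical prompt-to-completion callable (your own LLM wrapper, not a graphrag API), and the prompt wording is just an illustration:

```python
from typing import Callable, Literal

SearchMode = Literal["global", "local"]

# Hypothetical routing prompt; tune the wording for your own LLM.
ROUTER_PROMPT = (
    "Decide whether the user question is GLOBAL (broad, thematic, about the "
    "dataset as a whole) or LOCAL (about specific named entities). "
    "Answer with exactly one word: GLOBAL or LOCAL.\n\nQuestion: {query}"
)

def route_query(query: str, classify: Callable[[str], str]) -> SearchMode:
    """Ask an LLM (via `classify`) to label the query; fall back to
    local search when the model's output is unclear."""
    verdict = classify(ROUTER_PROMPT.format(query=query)).strip().upper()
    return "global" if verdict.startswith("GLOBAL") else "local"

# Example with a stub classifier standing in for a real LLM call:
print(route_query("What are the main themes?", lambda prompt: "GLOBAL"))  # global
print(route_query("Who is Scrooge?", lambda prompt: "LOCAL"))             # local
```

The routed mode can then pick between the GlobalSearch and LocalSearch engines before calling `search()`.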

natoverse avatar Apr 25 '25 21:04 natoverse

Hi, below is a minimal example of how to use conversation history. Are you planning to create a new client that supports conversation history? Otherwise, I can help create a notebook based on the snippet below. Let me know your suggestion. Thanks

from typing import Union

from graphrag.query.context_builder.conversation_history import ConversationHistory, ConversationRole
from graphrag.query.structured_search.global_search.search import GlobalSearch
from graphrag.query.structured_search.local_search.search import LocalSearch
# ---
# other imports go here
# ---

history = ConversationHistory()

# ---
# insert rest of the code up and until the search engine (either global or local) is defined
# the search engine is called `search_engine()` in this example
# ---

async def run_search(query: str, search_engine: Union[GlobalSearch, LocalSearch], history: ConversationHistory) -> str:
    # add the user query to the history
    history.add_turn(role=ConversationRole.USER, content=query)

    # run the graphrag search
    result = await search_engine.search(query, conversation_history=history)
    response = result.response

    # add the graphrag response to the history
    history.add_turn(role=ConversationRole.ASSISTANT, content=response)

    return response

# ---
# remaining code goes here, using `run_search`
# ---
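On amitt0488's question about remembering the last few turns: as far as I can tell, graphrag does not trim the history for you, so keeping the window bounded is something to handle client-side. A generic sketch of that pattern (plain Python, not graphrag's API; with graphrag you would rebuild a ConversationHistory from `turns` before each search):

```python
from collections import deque

class RollingHistory:
    """Keep only the last `max_turns` (user, assistant) exchanges so the
    context passed to the search engine stays bounded."""

    def __init__(self, max_turns: int = 5):
        # deque with maxlen silently drops the oldest exchange when full
        self.turns: deque[tuple[str, str]] = deque(maxlen=max_turns)

    def add(self, user_msg: str, assistant_msg: str) -> None:
        self.turns.append((user_msg, assistant_msg))

    def as_messages(self) -> list[dict[str, str]]:
        """Flatten the window into role/content messages."""
        msgs: list[dict[str, str]] = []
        for user_msg, assistant_msg in self.turns:
            msgs.append({"role": "user", "content": user_msg})
            msgs.append({"role": "assistant", "content": assistant_msg})
        return msgs

h = RollingHistory(max_turns=2)
h.add("q1", "a1")
h.add("q2", "a2")
h.add("q3", "a3")  # q1/a1 falls out of the window
print(len(h.as_messages()))  # 4
```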

kwkache avatar Apr 30 '25 08:04 kwkache