
Better prompts with mini DSL for larger graphs.

Open varunshenoy opened this issue 1 year ago • 1 comments

A major issue in the initial GraphGPT project is that the entire state of the graph is updated by the LLM.

Even though this is fine for an initial prototype, the approach has several drawbacks:

  • The current state of the graph is part of the prompt, and the entire new state is part of the output. This wastes a massive number of tokens and does not scale.
  • As the prompt grows, you will start to hit the MAX_TOKENS and context window limits, so your graph has a clear upper bound on size and complexity.
  • LLMs can sometimes slightly mess up the JSON output, leading to parse errors on the client.

Instead, we want a prompt that outputs instructions for updating a graph whose state is stored on the client side. This is cheaper in terms of tokens, since the client manages state rather than the prompt, and faster, since there are fewer tokens to stream back. We also gain flexibility: we can define unique instructions that we parse on the client side without worrying about GPT's ability to understand them.

We are now limited by the client's browser memory rather than GPT-3's MAX_TOKENS or context window. GraphGPT can handle much larger graphs.

Update: apparently this mini DSL is called a semantic triple!
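A semantic triple is a (subject, predicate, object) statement, which maps directly onto a graph edge: subject and object become nodes, the predicate becomes the edge label. A small sketch, assuming a hypothetical `(subject, predicate, object)` text format rather than whatever GraphGPT ultimately ships:

```typescript
type Triple = [subject: string, predicate: string, object: string];

// Parse one line like "(Alice, knows, Bob)" into a triple.
// Returns null on malformed input so the client can skip bad
// lines instead of failing on them.
function parseTriple(line: string): Triple | null {
  const m = line.match(/^\(\s*([^,]+),\s*([^,]+),\s*([^)]+)\)\s*$/);
  if (!m) return null;
  return [m[1].trim(), m[2].trim(), m[3].trim()];
}

// A triple is exactly one labeled edge between two nodes.
function tripleToEdge(t: Triple): { from: string; to: string; label: string } {
  const [subject, predicate, object] = t;
  return { from: subject, to: object, label: predicate };
}
```

So `(Alice, knows, Bob)` becomes an edge labeled `knows` from the `Alice` node to the `Bob` node, creating either node if it does not exist yet.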

varunshenoy avatar Feb 06 '23 04:02 varunshenoy