gpt4all
Ability to undo/edit previous request, response
Feature request
Have the ability to:
- edit the last request, to get a better-quality response
- edit a previous request/response, to tune response quality
- select and delete a previous request/response, to free up unwanted context and improve response quality
Motivation
I quite like gpt4all because it is easy to set up and "just works", without a complex Python setup or complex LLM configs.
However, one limitation of gpt4all compared to LM Studio and koboldcpp is the lack of an ability to edit the previous request/response context.
The reason is that to get a good-quality response, I sometimes need to remove or tweak a previous request/response in the middle of the context.
Also, if my last prompt didn't get a good response, I sometimes want to edit that prompt to get a quality output.
Your contribution
I can provide feedback
I think a similar request has been made here:
- #1150
Maybe also earlier ones. I'll update this comment if I find more.
Your first request looks doable, because it only affects one output and one input. In any case, it will have to roll back to before the change and reprocess everything from that point.
However, if you edit the conversation history in any other way, it would no longer be "in sync" with the model. So I'm not sure about 2 & 3. (Assuming 2 means going back more than one request/response.)
Looking into this feature currently, as I consider it necessary functionality for my workflow. I'm considering Merkle trees (the same structure git uses), and won't need larger contexts since it will recalculate anyway. If some other behaviour is desired, I'm amenable to that as well. Please advise.
Edit: for clarity, I'm specifically talking about adding the functionality myself, but I don't want to implement it in an undesired way, hence my request for advice.
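To make the proposal concrete, the Merkle/hash-chain idea could be sketched like this (an illustration only, with made-up helper names, not gpt4all code): each conversation turn gets a rolling hash that commits to every turn before it, in the spirit of git's commit chain, so after an edit you can cheaply locate the first invalidated turn.

```python
import hashlib

def turn_hashes(history):
    """Rolling hash per turn: each entry commits to all turns before it,
    like a git commit chain. history is a list of (role, text) pairs."""
    h = b""
    out = []
    for role, text in history:
        h = hashlib.sha256(h + role.encode() + b"\x00" + text.encode()).digest()
        out.append(h)
    return out

def first_divergence(old, new):
    """Index of the first turn whose hash differs between two histories;
    everything from that turn on must be reprocessed by the model."""
    ha, hb = turn_hashes(old), turn_hashes(new)
    for i, (x, y) in enumerate(zip(ha, hb)):
        if x != y:
            return i
    return min(len(ha), len(hb))
```

Everything before the divergence point can keep its cached state; everything after it is recomputed.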
> I'm considering merkle trees (same structure git uses), and won't need larger contexts since it will recalculate anyhow
That's probably overkill - llama-cpp-python (which ooba's TGWUI uses) just caches the prompt, looks for what changed, and then decodes only the new part; the whole previous conversation is submitted to llama-cpp-python every time it changes. It also supports caching of previous prompts, either in memory or on disk, but TGWUI doesn't use that and I haven't personally found it necessary.
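The prefix-reuse part of that is simple enough to sketch (illustrative only; function names are made up and llama-cpp-python's internals differ in detail): compare the new token sequence against the cached one, keep the longest common prefix, and decode only the suffix.

```python
def common_prefix_len(cached, new):
    """Length of the longest shared prefix of two token lists."""
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n

def tokens_to_decode(cached_tokens, new_tokens):
    """Return (reused, fresh): the prefix whose state can be kept,
    and the suffix that must be decoded again after an edit."""
    keep = common_prefix_len(cached_tokens, new_tokens)
    return new_tokens[:keep], new_tokens[keep:]
```

So editing the last prompt only costs redecoding from the edit point onward, not the whole conversation.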
I think we should work towards what llama-cpp-python does.
Click the edit button, then change the text. When editing a user prompt, the text can be resubmitted for a new reply. When editing an assistant reply, the edit is simply saved. Include an option to delete the record, opposite from and/or within the edit menu, to prevent accidental deletion.
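That flow could be sketched roughly as follows (hypothetical helper, not gpt4all's actual API): editing a user prompt truncates everything after it and flags a resubmit; editing an assistant reply is saved in place.

```python
def edit_message(history, index, new_text):
    """Apply an edit to a chat history (list of (role, text) pairs).

    Returns the new history and whether the edited prompt should be
    resubmitted to the model for a fresh reply.
    """
    role, _ = history[index]
    if role == "user":
        # Later turns are invalidated by the edit; drop them and resubmit.
        return history[:index] + [("user", new_text)], True
    # An edited assistant reply is just saved; no regeneration needed.
    new_history = list(history)
    new_history[index] = ("assistant", new_text)
    return new_history, False
```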
I dare to claim: adding this feature would tremendously improve the quality of the data sent to the datalake!
I really think this issue should be part of the roadmap, or at least labeled as a medium/high-priority feature.