shell_gpt
Don't use cache when the same query is run in succession
In this scenario, the user is most likely not satisfied with the answer, and wants to regenerate.
Fixes #95
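A minimal sketch of the idea (class, file, and attribute names here are illustrative, not the actual shell_gpt implementation): record the key of the most recent lookup on disk and bypass the cached answer when the same query arrives twice in a row, so the second run regenerates instead of replaying.

```python
# Illustrative sketch only, not the real shell_gpt cache code.
# A repeated query (same key as the previous run) skips the cache hit
# and regenerates the completion; any other query behaves as before.
import hashlib
from pathlib import Path


class Cache:
    def __init__(self, cache_dir: Path) -> None:
        self.cache_dir = cache_dir
        self.cache_dir.mkdir(parents=True, exist_ok=True)
        self.last_key_file = self.cache_dir / ".last_key"

    def __call__(self, func):
        def wrapper(*args, **kwargs):
            key = hashlib.md5((str(args) + str(kwargs)).encode()).hexdigest()
            cache_file = self.cache_dir / key
            repeated = (
                self.last_key_file.exists()
                and self.last_key_file.read_text() == key
            )
            self.last_key_file.write_text(key)
            if cache_file.exists() and not repeated:
                return cache_file.read_text()  # normal cache hit
            result = func(*args, **kwargs)     # first run, or repeated query: regenerate
            cache_file.write_text(result)
            return result

        return wrapper
```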
How does this interact with --chat? Does it overwrite or append the last message in the chat cache?
The chat cache won't be affected, because it's managed by a separate ChatCache class and stored in a separate directory. In chat mode, even if you run the same query repeatedly, the chat history is prepended to the query, so the Cache class misses every time and each run creates a new cache item in /tmp/shell_gpt/cache.
The naming of ChatCache is a bit misleading in this regard. It's not really a cache, but a chat history storage.
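To make the cache-miss behaviour concrete, here is a small example (the hash function and message layout are assumptions for illustration, not shell_gpt's actual code): because the growing history is part of the key, an identical user query produces a different key on every turn.

```python
# Illustrative only: a key derived from the full message list changes each
# turn, because the prepended chat history grows even when the user query
# itself is identical.
import hashlib
import json


def cache_key(messages: list) -> str:
    return hashlib.md5(json.dumps(messages).encode()).hexdigest()


query = {"role": "user", "content": "explain briefly"}
history = [{"role": "user", "content": "what is POSIX?"}]

first = cache_key(history + [query])

history += [query, {"role": "assistant", "content": "..."}]
second = cache_key(history + [query])

print(first == second)  # False -> a new cache item on every turn
```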
And do we have support to flush this Cache/history?
For now you can delete /tmp/shell_gpt. A command-line option could be added for this, but that's out of scope for this pull request.
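For reference, flushing everything by hand is just removing that directory (assuming the default /tmp/shell_gpt location mentioned above):

```python
# Remove the completion cache and chat history in one go.
import shutil

shutil.rmtree("/tmp/shell_gpt", ignore_errors=True)
```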
Thank you for the PR.
I find these changes a bit confusing. If I'm using the --cache option and it doesn't actually cache on the second run, then there is no caching, correct? Additionally, if a user is unsatisfied with the results from the GPT model, it may be a better idea to adjust the --temperature and --top-probability settings. I would suggest solving this by adding extra details to the README for these cases.
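For context, a rough sketch of how those flags typically feed the sampling parameters of the completion request (hypothetical code, not shell_gpt's actual client; the legacy openai Python API is assumed):

```python
# Hypothetical sketch: --temperature and --top-probability map onto the
# sampling parameters of the completion request, which is the supported way
# to get a different answer for the same prompt.
import openai


def get_completion(prompt: str, temperature: float = 1.0, top_probability: float = 1.0) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,   # --temperature
        top_p=top_probability,     # --top-probability
    )
    return response["choices"][0]["message"]["content"]
```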