More control over chat history
- [x] I checked that there are no similar issues
Aider currently includes the whole chat history in context, and the only way to control it is /clear, which deletes the entire history. I think this (as well as the current AI summary of chat history) is not enough.
Allowing more control over what chat history goes into context can save quite a lot of tokens and improve response quality (messy context confuses the LLM).
There is an AI summary now, which is great, but consider this case: an important exchange happened before some unimportant ones, so the important exchange will likely be folded into the AI summary, while the unimportant ones remain raw and verbose, occupy a lot of context, and also confuse the LLM. Also, some users may prefer manual control over a basic AI summary.
Proposal: run /history to open a history.md where every chat exchange (user request and LLM response) can be selected or unselected via a markdown checkbox.
For example:
- [x] **QUERY 1** (2025/3/1 01:46:10): <!-- Checkbox checked by default -->
> USER INPUT
>
> /code Please implement main.py ...
>
> ---
> LLM OUTPUT
>
> OK. I will implement main.py ...
- [ ] **QUERY 2** (2025/3/1 01:48:50): <!-- User can uncheck this, as this explanation uses up lots of token, but have very limited use for future LLM responses -->
> USER INPUT
>
> /ask I don’t understand. Can you explain the changes in detail? ...
>
> ---
> LLM OUTPUT
>
> OK. ...
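To make the idea concrete, here is a minimal sketch (in Python, since aider is Python) of how the checked/unchecked state could be parsed back out of such a file. The `- [x] **QUERY N**` entry shape simply follows the example above; none of this is existing aider code:

```python
import re

# Matches the proposed entry header, e.g. "- [x] **QUERY 1** (2025/3/1 01:46:10):"
CHECKBOX = re.compile(r"^- \[( |x)\] \*\*QUERY \d+\*\*")

def selected_queries(md_text):
    """Yield the full text of every entry whose checkbox is checked."""
    current, checked = [], False
    for line in md_text.splitlines():
        m = CHECKBOX.match(line)
        if m:
            if checked and current:
                yield "\n".join(current)   # emit the previous checked entry
            checked = m.group(1) == "x"
            current = [line]
        elif line.startswith(">") or not line.strip():
            current.append(line)           # blockquoted body of the entry
    if checked and current:
        yield "\n".join(current)
```

Unchecked entries would stay in the file for later re-selection; only the checked ones get fed back into the LLM context.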
Maybe only display the first 300 characters of every user input and LLM output, so the file is easier to read.
Keep a backup copy of history.md, so that if the user's edited history.md cannot be parsed, the edit is ignored and the copy is used (see the fallback sketch below).
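A sketch of that backup fallback, assuming illustrative `history.md`/`history.md.bak` paths and a trivial stand-in for real validation:

```python
import shutil
from pathlib import Path

HISTORY = Path("history.md")       # illustrative paths, not aider's
BACKUP = Path("history.md.bak")

# snapshot before handing the file to the user's editor
shutil.copyfile(HISTORY, BACKUP)
# ... user edits history.md in their editor here ...
edited = HISTORY.read_text()
if "- [" not in edited:            # stand-in for a real parse/validation step
    edited = BACKUP.read_text()    # edit can't be parsed: ignore it, use the copy
```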
Can anyone just say something? Maybe simply add an enhancement tag? Will this feature be useful? Maybe I can write this and create a PR, but I don't want to waste my time writing a feature you will never merge. So please reply.
you could at least try to implement this feature and then you'd discover a `max_chat_history_tokens` setting option
@festeh Yeah, I know this setting. If you mean using the summary of long chat history instead: half of my issue above is about why I think the AI summary of long chat history controlled by `max_chat_history_tokens` is not enough.
whoops, sorry for the tl;dr. I'm usually fine with the /clear command, but that's because I don't like to read much lul. I have no opinion on this feature, but I'd start with a CLI tool that manipulates the history file outside of aider - that's simpler to implement, and then you can decide how useful it is. Something like the sketch below.
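A bare-bones version of such a tool might look like the following. The `# aider chat started` heading is what aider writes between sessions in `.aider.chat.history.md`; the flags are invented for this sketch:

```python
#!/usr/bin/env python3
"""Keep only the last N aider sessions in the chat history file."""
import argparse
import re
from pathlib import Path

ap = argparse.ArgumentParser()
ap.add_argument("--history", default=".aider.chat.history.md")
ap.add_argument("--keep-last", type=int, default=1, help="sessions to keep")
args = ap.parse_args()

path = Path(args.history)
# split on session headings, keeping each heading with its session
sessions = re.split(r"(?=^# aider chat started)", path.read_text(), flags=re.MULTILINE)
sessions = [s for s in sessions if s.strip()]
path.write_text("".join(sessions[-args.keep_last:]))
print(f"kept {min(args.keep_last, len(sessions))} of {len(sessions)} sessions")
```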
:eyes:
I'm running into this exact same issue, wanting some of the ease of context management I can easily achieve in Google AI Studio.
It should be possible to delete specific messages and to edit previous user inputs or even the LLM replies. What I can't find to make this happen is the current context file that the LLM receives each turn.
I've been searching high and low; if anyone is familiar with the method through which the current context is fed in, and where that elusive file is saved, I'll share whatever I build for the task right here.
I think that Aider definitely needs to give the user more control over context management. I'm not sure about the checklist approach but imo it's at least an interesting idea.
It's gotta be some sort of cache right?
> It's gotta be some sort of cache right?
@EleVicted I checked the sqlite db in `.aider.tags.cache.v3/` and `.aider.tags.cache.v4/`, and there is no history in it. That means all history is stored only in `.aider.chat.history.md`, which makes controlling context much easier: just deal with that markdown file.
Currently the parsing of that history file is in:
- https://github.com/Aider-AI/aider/blob/5e7ef6c50e58aab3c10c6b26cd38595da3e5a323/aider/coders/base_coder.py#L498
- https://github.com/Aider-AI/aider/blob/5e7ef6c50e58aab3c10c6b26cd38595da3e5a323/aider/utils.py#L145-L193
- https://github.com/Aider-AI/aider/blob/5e7ef6c50e58aab3c10c6b26cd38595da3e5a323/aider/coders/base_coder.py#L968-L978
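For anyone who wants to experiment without touching aider, here is a very loose, self-contained approximation of what the linked utils.py helper does. The real parser handles more cases (tool output, code fences), so treat this as a sketch only:

```python
from pathlib import Path

def split_history(md_text):
    """'#### ' lines are user messages, '# ' lines mark new sessions,
    everything else is treated as assistant output."""
    messages = []
    for line in md_text.splitlines():
        if line.startswith("#### "):
            role, content = "user", line[len("#### "):]
        elif line.startswith("# ") or not line.strip():
            continue                      # session heading or blank line
        else:
            role, content = "assistant", line
        if messages and messages[-1]["role"] == role:
            messages[-1]["content"] += "\n" + content
        else:
            messages.append({"role": role, "content": content})
    return messages

msgs = split_history(Path(".aider.chat.history.md").read_text())
print(f"{len(msgs)} messages restored")
```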
> I think that Aider definitely needs to give the user more control over context management. I'm not sure about the checklist approach but imo it's at least an interesting idea.
I also vote for more control. Actually, `--max-chat-history-tokens` does not seem to work well; see below. Even with the max tokens limited, the `--restore-chat-history` flag still runs into an exception, which I think is a bug:
```
aider --restore-chat-history --max-chat-history-tokens 5000
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Aider v0.84.0
Main model: openai/claude-3.7-sonnet-thought with whole edit format
Weak model: openai/gpt-4o-mini
Git repo: .git with 36 files
Repo-map: using 1024 tokens, auto refresh
Restored previous conversation history.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> litellm.BadRequestError: OpenAIException - prompt token count of 102211 exceeds the limit of 64000
litellm.BadRequestError: OpenAIException - prompt token count of 102211 exceeds the limit of 90000
summarizer unexpectedly failed for all models
```
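Until that is fixed, a blunt workaround is to trim the history file to a rough budget before launching aider with `--restore-chat-history`. The ~4 characters per token figure below is a crude heuristic, not a real tokenizer:

```python
from pathlib import Path

BUDGET_TOKENS = 5000
path = Path(".aider.chat.history.md")
text = path.read_text()
max_chars = BUDGET_TOKENS * 4           # crude chars-per-token estimate
if len(text) > max_chars:
    tail = text[-max_chars:]
    cut = tail.find("\n#### ")          # resume at a user-message boundary
    if cut != -1:
        tail = tail[cut + 1:]
    path.write_text(tail)
    print(f"trimmed history to roughly {len(tail) // 4} tokens")
```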