[User Discussion] Seeking feedback on optimal Aider workflows
Issue
As a user of Aider for the past two weeks, I've found it to be a powerful tool, but I'm curious about how others are using it effectively. I believe we could all benefit from sharing our workflows and strategies. This discussion aims to gather insights from the community that might help improve our individual use of Aider and potentially provide valuable feedback for the Aider documentation.
I'll start by sharing my current workflow, which I've been using exclusively with Claude 3.5 Sonnet on a Flutter project for a mobile app (the programming language in use is thus Dart):
- Decide on the feature:
  - Determine the overall feature I want to be working on
- Determine sub-functionality scope:
  - Try to find the optimal size for the sub-functionality of the feature
  - [Unsure] How best to determine this scope
- Initiate interaction:
  - [Unsure] Whether to "/ask" the model first and then explain which parts of the answer should be implemented, or to use coding mode directly to achieve my goal
  - If using coding mode, provide a detailed prompt
- Refine output:
  - If the result isn't good, "/undo" and try again with a more precise prompt
  - [Unsure] Whether to use the "/clear" functionality (see the command sketch after this list)
    - Sometimes it picks up on material from an earlier prompt where it delivered a blatantly wrong solution
    - On the other hand, previous context can sometimes be helpful
- Test and error handling:
  - If there's a commit, check whether it compiles and runs, whether there are errors, and whether the change really does what I was trying to achieve
  - For complex errors: put them into Aider
  - For simple errors (e.g., missing semicolons, imports): fix manually
    - I can usually fix these faster myself, especially since Android Studio's IDE tooling handles most of them with very few problems
  - Commit manual fixes with a "manual" commit message
- Iterate:
  - Continue like this until the feature I'm trying to build is finished
- Review:
  - Mark all the commits in Android Studio that were made for that feature
  - See what has changed overall
  - Either ask Aider to work on issues I discovered or fix things myself
  - Continue like that until I'm satisfied with the overall change for the feature
- Finalize:
  - Copy all the commit messages that Aider generated
  - Ask the LLM for a single commit message that sums them all up
  - Personalize the resulting message into what I think is appropriate
  - Squash everything done for that feature in that session into one commit with that message (see the sketch below)
  - In the end, there's only one commit
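To make the mechanics concrete, here is a condensed, command-level sketch of the loop above (the file paths, messages, and $base_rev are placeholders, and the "/undo" and "/clear" notes reflect my understanding of aider's behavior):

```bash
# Start an aider session with the files for the sub-feature (placeholder paths):
aider lib/feed_screen.dart lib/feed_repository.dart

# In the chat: either discuss first with /ask, or prompt code mode directly.
# If a result is bad:
#   /undo    reverts the commit aider just made
#   /clear   wipes the chat history but keeps the added files

# Quick hand-fixes (missing imports, semicolons) get a "manual" marker:
git commit -am "manual: add missing import"

# Review the overall change across the feature's commits
# (assuming the work sits on its own branch off main):
git diff main...HEAD

# Finalize: collapse everything from the session into a single commit,
# where $base_rev is the commit the feature started from:
git reset --soft $base_rev
git commit -m "<summarized, personalized feature message>"
```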
Key uncertainties:
- Optimal task scope for Aider
- Whether to use "/ask" first or go directly into coding mode
- Whether to use "/clear" or maintain context
- Best practices for reviewing and refining Aider's output
I'd love to hear about your Aider workflows! Please share:
- Which model(s) you're using
- Your typical project type (e.g., web development, data analysis, etc.)
- Programming languages and frameworks being used
- Your step-by-step process
- Any tips or tricks you've discovered
- Challenges you've faced and how you've overcome them
- How you decide on the scope of tasks to give Aider
- Strategies for reviewing and refining Aider's output
Your insights will be invaluable in helping us all use Aider more effectively. They might also provide useful feedback to consider for the documentation or feature improvements.
Let's learn from each other and optimize our Aider experiences!
- You can use `/commit` to commit manual changes.
- The squashing can in principle be automated using aider too. I ran an experiment of putting the outputs of `git log $base_rev --patch` and `git diff $base_rev` into files and using `/ask` to suggest the squashing and/or point out unrelated changes, basically "to review the PR" (see the sketch after this list).
- You can `/add` some kind of `AIDER-README.md` so that aider understands the overall intent of the project, and you can use comments to supply extra context to aider automatically, e.g. JSDoc file overview comments. This is also useful to prevent aider from fixing things it repeatedly wants to fix (e.g. it thinks that `gpt4o-mini` is a typo for `gpt4o` because the model knows nothing about the newer developments).
- You can ask aider to suggest 2 improvements to a file and then ask it to selectively implement them. The same technique can be used when planning.
- For large features, an implementation plan can be written by aider to `FOO-PLAN.md` or to the source comments. The big advantage of language models is that they are good at writing extensive plans and documentation, so you can essentially do the classical waterfall-style planning-design-implementation instead of the agile "randomly grow by a single feature at a time". Of course you need to find a balance, but my suggestion is that writing documentation and plans is more affordable now.
- I used it for Bash, Ansible, Dockerfile and nodejs/js (non-web). I also did some experiments with Ragel (only claude-3.5 is capable of it, because of the obscurity of the language).
- I treat LLMs as I would treat humans, according to their capacities. Claude is more capable than gpt4o, but in general I treat them as incapable juniors requiring micromanagement. I have seen humans with gpt4o-like capabilities, so it isn't hard for me. You essentially micromanage a junior, so I guess reading the literature on how to manage teams of interchangeable idiots is very helpful for developers unfamiliar with technical management who want to use LLMs efficiently.
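A rough reconstruction of that review experiment (the file names are illustrative, and I read `git log $base_rev --patch` as "the commits made since `$base_rev`"):

```bash
# Dump the feature's commit history and combined diff into files:
git log $base_rev.. --patch > pr-commits.txt
git diff $base_rev > pr-diff.txt

# Add them to the chat as read-only context and ask for a review:
aider --read pr-commits.txt --read pr-diff.txt
#   /ask Review this as a PR: suggest how to squash the commits and point
#        out any changes that look unrelated.
```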
I agree it would be great to have more discussion about people's workflows. I've looked on reddit etc to find a few ideas, but haven't found any great single place for info.
I use aider in WSL
Current methods
- I use VSCode and run aider in a separate terminal with --no-auto-commits. With each change, I use diff mode in VSCode's git tool to review the changes, picking and choosing rather than using aider's /undo feature. It's also easier to read code there than in the aider terminal interface. I commit every small change so that the diff view works and I can easily unpick any problems (see the sketch after this list).
- I use ChatGPT and Claude web interface to iterate on small issues to avoid using up tokens with aider. I've already paid the fixed monthly subscription for the web interface so I might as well use it when I can rather than buying more tokens.
- I carefully manage the context window, using /drop and /clear whenever possible.
- I use /ask, sometimes having a discussion then asking it to produce a prompt for what we just discussed.
- I use the /clipboard function to input screenshots of results and debugging sessions
- I mainly use python, with type annotations and good comments/docstrings to help the context.
- I add markdown files with specialist knowledge relating to the area I'm working on, including code examples
- I use Sonnet 3.5, mainly through openrouter to avoid limits
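Concretely, that review loop looks something like this (a sketch; the model string and file path are placeholders):

```bash
# Run aider with auto-commits off so every change stays in the working tree:
aider --no-auto-commits --model openrouter/anthropic/claude-3.5-sonnet

# After each aider edit, review in VSCode's git diff view, then:
git add -p                     # stage only the hunks worth keeping
git checkout -- src/foo.py     # discard an unwanted edit instead of /undo
git commit -m "small reviewed step"
```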
Questions I still have:
- How good is Cursor? Is it complementary to aider or an alternative?
- I find that the LLM lacks context about some data structures, for example when working with pandas dataframes and with databases. I get lots of errors caused by it not understanding the dimensions and data types it is working with. I've tried to provide this context but haven't yet found a good solution (one idea is sketched below). I've also had problems with different date data types.
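One idea for the dataframe problem (untested; file names are placeholders) is to dump the frame's shape and dtypes to a small file that aider reads as context:

```bash
# Write the dataframe's shape and dtypes to a context file:
python -c "
import pandas as pd
df = pd.read_csv('data.csv')          # placeholder input
with open('df-schema.md', 'w') as f:
    f.write(f'shape: {df.shape}\n\n') # e.g. shape: (1000, 12)
    f.write(df.dtypes.to_string())    # column name -> dtype, one per line
"

# Give it to aider as read-only context for the session:
aider --read df-schema.md
```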
Ideas I want to try
- I haven't yet tried this but I was thinking of having aider edit a design doc iteratively rather than using the chat history to store requirements.
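A sketch of what that could look like (file names are arbitrary):

```bash
# Keep requirements in a living design doc that aider edits directly:
aider DESIGN.md src/pipeline.py

# Then, in the chat, after each round of discussion:
#   Update DESIGN.md with the decisions above, then implement the next
#   open item from the doc.
```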
I'm going to close this issue for now, but feel free to add a comment here and I will re-open or file a new issue any time.