Is Aider suitable for complex and large-scale projects? (models only support 4k tokens by default)
Issue
I am exploring Aider for use in large, non-trivial codebases. It works well on small scripts and modular tasks, but I want to understand how it scales to more complex projects.
Some challenges I've run into include:
- Token/context limits when feeding in many files (models only support 4k tokens by default)
- Maintaining correctness when refactoring or fixing logic across multiple interdependent modules
Version and model info
No response
I've been using aider almost since public release, and while it was the best option for a while, I don't think it is currently viable for a large codebase.
The main issue is that, compared to Claude Code and Codex (and Gemini), its agent-tool orchestration is poor to non-existent. This leaves you, the user, in charge of deciding which files to add to or remove from context, and often which command-line tools to use.
It's sort of a v1 vision of a terminal coding assistant, more tightly focused on just assisting the development loop. For those use cases, it's still really good. For a large codebase, though, there's a huge amount of sugar built into the other tools right now that makes them much less frustrating.
> models only support 4k tokens by default
Are you using Ollama? This is not a model or Aider limitation; it's an Ollama bug. Just switch to llama.cpp or something else that actually works.
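If you do stay on Ollama, aider's model settings file can raise its small default context window. A minimal sketch, assuming a Qwen coder model; substitute whatever `ollama list` shows for you:

```yaml
# .aider.model.settings.yml — override Ollama's small default context window.
# The model name here is an example, not a recommendation.
- name: ollama/qwen2.5-coder:32b-instruct-fp16
  extra_params:
    num_ctx: 65536   # Ollama defaults to a few thousand tokens unless raised
```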
Currently using the Sonnet 4 model.
Sonnet supports 200k tokens, and it now even seems to be configured in 1M mode (https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json#L568C30-L568C33). I have no idea where your 4k comes from.
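You can check what context window litellm actually reports for your model. A quick sketch; the model id below is an assumption, so substitute the one you pass to aider:

```python
# Sanity-check the context window litellm has on record for a model.
import litellm

info = litellm.get_model_info("anthropic/claude-sonnet-4-20250514")
print(info["max_input_tokens"])  # should be far above 4k for Sonnet
```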
@AP-orion what is your use case? If you are already using Anthropic, why not use the (imho great) claude-code?
For me, Aider is something to run on closed-source code. If that is not your use case, I would like to understand why you have chosen Aider + Sonnet.
@Seikilos I’m using Aider for code debugging. My main focus right now is on the debugging workflow, where Aider runs the code, detects errors, fixes them automatically, and re-runs the script to verify the fix. Initially, I was using the o3-mini model for this, but I’ve now switched to Sonnet 4 for better performance and accuracy.
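For that run/fix/re-run loop, aider's built-in test hooks can automate most of it. A sketch of the invocation, using aider's `--test-cmd` and `--auto-test` flags; `script.py` is a placeholder for your own entry point:

```bash
# Run aider so it executes the script after each edit and feeds any
# failure output back to the model until the run succeeds.
aider --model sonnet --test-cmd "python script.py" --auto-test script.py
```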
Do you have reasons why you decided not to use Claude Code?