[Feature Proposal] Aider Macros
Aider Macros
This fork adds Aider Macros—a lightweight Python DSL for scripting multi‑step, parallel, “agentic” workflows directly inside Aider.
Why Macros?
The existing /load command is useful, but it has three key limitations:
| Limitation | Impact |
|---|---|
| Sequential‑only | No loops, branching, or early exits |
| No arguments | Steps can’t pass data to one another efficiently |
| No parallelism | Can’t fan out multiple LLM calls and gather results |
Aider Macros lift these constraints while remaining interactive and deterministic.
Key Features
- Flexible control flow – Use standard Python loops, conditionals, and functions around LLM invocations.
- Spawn / gather concurrency – Run up to N models in parallel, then `gather()` their outputs. Includes an ncurses‑style progress line so you can watch them finish in real time.
- Tool use built‑in – A new `/search` command (OpenRouter) is exposed to macros for quick web look‑ups.
- Deterministic & testable – Macro logic is plain Python, so you can unit‑test or lint it like any other code.
- Gentle “agentic” path – Adds tool use and light autonomy without going fully headless or uncontrollably recursive.
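The macro API in this PR is still a draft, so the sketch below only illustrates the spawn/gather idea using a stand‑in `chat()` stub and `concurrent.futures`; the function names are placeholders, not the fork's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def chat(prompt: str) -> str:
    # Hypothetical stand-in for a macro's LLM call; the real macro
    # would send the prompt to the configured model.
    return f"draft answer for: {prompt}"

def spawn_and_gather(prompts, max_workers=4):
    """Fan out one LLM call per prompt, then gather results in order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(chat, prompts))

results = spawn_and_gather(["refactor module A", "write tests for module B"])
```

Because the fan‑out is ordinary Python, standard loops and conditionals can wrap it, which is the point of the DSL being plain Python rather than a new language.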
Inspirations
- “Loops as LLM Calls”
- Eric Elliott’s SudoLang
- Aider’s existing /load and architect modes
Relationship to Core Aider
Architect mode already offers macro‑like behaviour, but Aider Macros generalise the idea and deprecate /load and /architect. The README has been rewritten to focus on this new system; full docs will be restored and expanded if there’s interest in upstreaming.
@paul-gauthier Hey, this is still a draft, but I wanted to float the idea.
I'll probably maintain this as a fork; I'd like to start automating some of the workflows I'm doing with LLMs for game development: screen loops, web of thoughts, etc.
I've had poor luck with fully automatic agents, but lots of success with /architect and critic‑chain style reasoning.
The Examples section in your fork is particularly worth a look, for anyone interested. Nice work!
@nbardy I question how this is different from MCP; it feels like the entire approach could work over the MCP RPC, maybe as an abstraction on top of it. I'm not a fan of tools creating their own REPL DSLs. Reminds me of https://xkcd.com/927/.
@pcfreak30 This is very different from MCP. It solves a different case, and it's compatible with MCP: macro `.chat` commands can still call MCP tooling inside the macro, and any time the macro calls an LLM, that LLM can call MCP tooling.
This is a layer around tool calling.
Most importantly, this is different from MCP because the macro handles the control flow, NOT the LLM.
If you want to run test‑driven development, you need to make sure the tests run every time. You can ask the LLM to call the tests for you, but it only does so with roughly 85% accuracy, and it can also rewrite your tests.
The idea is that you run a command like "Write a code patch" that is interpreted by the LLM, can call MCP, etc.
However, that LLM command runs in a program context with a deterministic control loop. The Python code calls and checks the tests every time and can feed the results back into the next LLM call. The model cannot rewrite the tests or skip running them; that is guaranteed by the deterministic Python control flow.
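To make the "macro owns the loop" point concrete, here is a minimal sketch of a deterministic TDD loop. The `chat()` stub and the injectable `run_tests` callback are assumptions for illustration, not the fork's real API; a real macro would invoke the model and the project's test suite.

```python
def chat(prompt: str) -> str:
    # Hypothetical stand-in for the macro's LLM call.
    return "patch attempt"

def tdd_loop(run_tests, max_rounds=3):
    """The macro, not the model, owns the loop: tests run after every patch.

    run_tests is a callable returning (ok: bool, output: str), e.g. a
    wrapper around `pytest -q`. The model can never skip this step.
    """
    for _ in range(max_rounds):
        chat("Write a code patch that makes the failing tests pass")
        ok, output = run_tests()  # always executed, deterministically
        if ok:
            return True  # tests green, stop early
        chat(f"Tests failed:\n{output}\nFix the patch.")
    return False
```

The loop structure, early exit, and retry budget are all plain Python, so the guarantee that tests run every round comes from the control flow, not from prompting.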
I'm now developing my own CLI tool around this; since the PR isn't getting much traction, I'll close it.