Add MCP support
This is a rough implementation of MCP in Aider. It currently supports adding stdio MCP servers with this configuration in `~/.aider.conf.yml`:
```yaml
mcp: true
mcp-servers:
  - git-server
mcp-server-command:
  - "git-server:uvx mcp-server-git"
mcp-tool-permission:
  - "git-server:git_status=auto"
mcp-server-env:
  - "git-server:GIT_SERVER=server"
```
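As a side note on this flag format: each entry is a `server:value` string, which is straightforward to split. A hypothetical parser sketch (not the PR's actual code):

```python
def parse_scoped_option(item: str):
    """Split a "server:key=value" or "server:value" config entry.

    "git-server:git_status=auto"   -> ("git-server", "git_status", "auto")
    "git-server:uvx mcp-server-git" -> ("git-server", "uvx mcp-server-git", None)
    """
    server, _, rest = item.partition(":")
    if "=" in rest:
        key, _, value = rest.partition("=")
        return server, key, value
    return server, rest, None

print(parse_scoped_option("git-server:git_status=auto"))
print(parse_scoped_option("git-server:uvx mcp-server-git"))
```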
Todo:
- [ ] Add HTTP support for remote MCP servers
- [ ] Change the way async is implemented in the mcp module? Currently I use threading and a message queue to keep everything else synchronous
- [x] Use a better structure for the config file? I used a format that is easy to write in both the config file and as CLI arguments
- [ ] Delete the command that I used to test MCP tools
- [ ] Add documentation
- [ ] Add MCP resources
- [ ] Add MCP prompts
- [ ] Add tests
- [ ] Add Anthropic's mcp dependency, since the feature depends on it (I don't know how to add it to requirements.txt properly)
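For context on the threading item above, the thread-plus-queue bridge can be sketched roughly like this (hypothetical names, not the PR's actual code): a background thread owns the asyncio event loop that would hold the MCP sessions, while the synchronous coder code talks to it through plain queues.

```python
import asyncio
import queue
import threading

request_q: "queue.Queue" = queue.Queue()
response_q: "queue.Queue" = queue.Queue()

async def handle_requests():
    # In the real module this would dispatch MCP tool calls over stdio;
    # here we just echo to show the sync<->async hand-off.
    while True:
        tool_name = await asyncio.to_thread(request_q.get)
        if tool_name is None:  # shutdown sentinel
            break
        response_q.put(f"result of {tool_name}")

def worker():
    asyncio.run(handle_requests())

thread = threading.Thread(target=worker, daemon=True)
thread.start()

# The caller stays fully synchronous: put a request, block on the answer.
request_q.put("git_status")
print(response_q.get())  # -> result of git_status
request_q.put(None)
thread.join()
```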
How to interpret the config in ~/.aider.conf.yml?
Here's what I know from Claude Desktop / Roo Cline:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": [
        "c:/Users/<user>/AppData/Roaming/npm/node_modules/@modelcontextprotocol/server-filesystem/dist/index.js",
        "D:/"
      ],
      "disabled": true,
      "alwaysAllow": []
    }
  }
}
```
> How to interpret the config in ~/.aider.conf.yml? Here's what I know from Claude Desktop / Roo Cline:
For this, the config should be something like:
```yaml
mcp: true
mcp-servers:
  - filesystem
mcp-server-command:
  - "filesystem:node c:/Users/<user>/AppData/Roaming/npm/node_modules/@modelcontextprotocol/server-filesystem/dist/index.js D:/"
```
But now you made me realize that I haven't checked if it works on Windows...
I checked your code, and I have to say - great job implementing this! While I’m not a Python developer and can’t comment on the threading aspect 😁, I do have some feedback.
One suggestion: I’d avoid using the reflection message as the MCP tool’s response. If I’m not mistaken, only three reflections are allowed by default, which could limit the agent’s usability by restricting iterations. Changing max_reflection_count would have broader effects, like impacting incorrect SEARCH/REPLACE blocks.
From a UX perspective, I’d also reconsider how servers are defined in the config file. Something like this might be more intuitive:
```yaml
mcp: true
mcp-servers:
  git-server:
    command: "uvx mcp-server-git"
    tool-permissions:
      git_status: auto
    env:
      GIT_SERVER: server
  another-server:
    command: "some-command"
    tool-permissions:
      some_tool: auto
    env:
      SOME_ENV: value
```
I like the proposal by @wladimiiir for config formatting (no hyphens).
hi @lutzleonhardt - I'm the litellm maintainer. We'd love to help aider support MCP. Here's our MCP bridge on the litellm Python SDK: https://docs.litellm.ai/docs/mcp#litellm-python-sdk-mcp-bridge
Any thoughts on this? How can we help you?
Since aider's YAML configuration file cannot support a nested configuration structure, I added an `mcp-configuration` CLI argument that points to a YAML file for MCP configuration specifically. E.g., in the regular configuration file:

```yaml
mcp-configuration: ~/.aider/mcp.yml
```
In `~/.aider/mcp.yml`:

```yaml
servers:
  - name: git-server
    command: "python -m mcp_server_git"
    env:
      GIT_SERVER: server
    permissions:
      git_status: auto
```
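Once parsed (e.g. with `yaml.safe_load`), that file yields a plain dict; a launcher could then split each server's command string into an argv list. A rough sketch with hypothetical names (the YAML parsing step is elided and its assumed result written as a literal):

```python
import shlex

# Assumed shape of the dict that ~/.aider/mcp.yml above would parse to
config = {
    "servers": [
        {
            "name": "git-server",
            "command": "python -m mcp_server_git",
            "env": {"GIT_SERVER": "server"},
            "permissions": {"git_status": "auto"},
        }
    ]
}

for server in config["servers"]:
    # shlex.split respects quoting, so commands with quoted arguments survive
    argv = shlex.split(server["command"])
    # A stdio transport would spawn this argv with the merged environment,
    # e.g. subprocess.Popen(argv, env={**os.environ, **server.get("env", {})})
    print(server["name"], argv)
```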
- I removed the redundant `enable` property in the MCP server class
- I added `num_mcp_iterations` & `max_mcp_iterations` so it won't clash with `max_reflections`
- I fixed some deadlocks in the message queues for now
Can we have a fallback in the MCP config file names, like happens with a lot of other aider config files? That way there can be MCP config that's specific to a particular repo, but also config that is global. So the final MCP config would be `~/.aider/mcp.yml` + `./.aider.mcp.yml`.
> hi @lutzleonhardt - I'm the litellm maintainer. We'd love to help aider support MCP. Here's our MCP bridge on the litellm python SDK: https://docs.litellm.ai/docs/mcp#litellm-python-sdk-mcp-bridge
>
> any thoughts on this? how can we help you?
Hi @ishaan-jaff, thanks for the offer to help :) Aider already uses LiteLLM to access the models. But for the MCP Bridge, a dedicated server (the LiteLLM proxy) needs to be run, right? I assume this is not suitable in this case, because aider wants to stay independent of a dedicated server component.
The new commit now aggregates all the sessions into one async while loop in a background thread.
There is currently a problem I'm trying to debug: when we send a Ctrl-C (to cancel an LLM call, for example), it is sent to the MCP server too, which makes it crash on a subsequent call. It causes issues with this one, for example: https://github.com/modelcontextprotocol/servers/tree/main/src/git
Please do not invent a new format for configuring MCP servers!!! MCP server configuration is already well established, and changing it from JSON to YAML will create endless maintenance headaches for all users.
> From a UX perspective, I'd also reconsider how servers are defined in the config file. Something like this might be more intuitive:
>
> `mcp: true mcp-servers: git-server: command: "uvx mcp-server-git"`
no, please don't!
Again, the way to configure MCP servers is well established, and inventing a new configuration format will create endless headaches of incomplete translatability between the classic way and this new custom YAML way.
May I suggest only implementing the pipe:: protocol for MCP at this time, and delaying the remote (HTTP) MCP servers,
as these are now known to be a new, serious security liability, essentially enabling the remote server (which can change its mind at any time) to convince your local machine to execute arbitrary code.
Let's have local MCP servers for now, as this is a more understood risk -- until the dust settles on the security of remote ones.
> May I suggest only implementing the pipe:: protocol
No! Document the vulnerability, and let people use and experiment with the new tech. I've got in-house SSE implementations that I'd like to put into service. For leading edge tech like MCP, the suffocating nanny ethos should not be applied.
> > May I suggest only implementing the pipe:: protocol
>
> No! Document the vulnerability, and let people use and experiment with the new tech. I've got in-house SSE implementations that I'd like to put into service. For leading edge tech like MCP, the suffocating nanny ethos should not be applied.
At least this functionality should be behind a separate additional opt-in config parameter (with a security disclaimer).
@jerzydziewierz
> the way to configure MCP servers is well established, and inventing a new configuration format will create endless headaches of incomplete translatability between classic and this new custom yaml way
It would be helpful if someone could provide a link to a centralized specification for this configuration format — if such a thing really exists.
@ei-grad
> It would be helpful if someone could provide a link to a centralized specification for this configuration format — if such a thing really exists.
A centralized spec does not exist, although it is proposed here: https://github.com/modelcontextprotocol/modelcontextprotocol/issues/292
In general, there is a convention adopted across apps like Cline, Cursor, Claude Desktop, Windsurf, etc. That said, LibreChat adopted YAML.
> > May I suggest only implementing the pipe:: protocol
>
> No! Document the vulnerability, and let people use and experiment with the new tech. I've got in-house SSE implementations that I'd like to put into service. For leading edge tech like MCP, the suffocating nanny ethos should not be applied.
>
> At least this functionality should be behind a separate additional opt-in config parameter (with a security disclaimer).
To be clear: An MCP server can usually handle its remote access needs internally without being exposed directly to the network. Running a full MCP server over the network is rarely the preferable approach and always introduces security risks.
anyone using this mcp-featured aider branch yet?
does the current implementation support images being returned to aider? this is key for having an MCP "show aider its work", for example a screenshot corresponding to the current code
re: @lutzleonhardt
> Aider already uses LiteLLM to access the models. But for the MCP Bridge a dedicated server (the LiteLLM proxy) needs to be run, right? I assume this is not suitable in this case, because aider wants to stay independent of a dedicated server part.
no, the litellm client acts as an mcp client, it doesn't need to talk to the litellm proxy server.
I tried this PR using a docker container, but it seems we need `mcp` in requirements.txt. And when I type /quit to quit aider, it throws a RuntimeError as below:
```
Traceback (most recent call last):
  File "/venv/bin/aider", line 8, in <module>
    sys.exit(main())
  File "/venv/lib/python3.10/site-packages/aider/main.py", line 1157, in main
    coder.run()
  File "/venv/lib/python3.10/site-packages/aider/coders/base_coder.py", line 862, in run
    self.run_one(user_message, preproc)
  File "/venv/lib/python3.10/site-packages/aider/coders/base_coder.py", line 903, in run_one
    message = self.preproc_user_input(user_message)
  File "/venv/lib/python3.10/site-packages/aider/coders/base_coder.py", line 892, in preproc_user_input
    return self.commands.run(inp)
  File "/venv/lib/python3.10/site-packages/aider/commands.py", line 304, in run
    return self.do_run(command, rest_inp)
  File "/venv/lib/python3.10/site-packages/aider/commands.py", line 276, in do_run
    return cmd_method(args)
  File "/venv/lib/python3.10/site-packages/aider/commands.py", line 1020, in cmd_quit
    self.cmd_exit(args)
  File "/venv/lib/python3.10/site-packages/aider/commands.py", line 1014, in cmd_exit
    stop_mcp_servers()
  File "/venv/lib/python3.10/site-packages/aider/mcp/__init__.py", line 19, in stop_mcp_servers
    mcp_manager.stop_servers()
  File "/venv/lib/python3.10/site-packages/aider/mcp/mcp_manager.py", line 293, in stop_servers
    self.mcp_thread.join()
  File "/usr/local/lib/python3.10/threading.py", line 1091, in join
    raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
```
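For reference, a guard along these lines would avoid joining a never-started thread (a sketch, not the actual `mcp_manager.py` code; the class and method names here are stand-ins):

```python
import threading

class McpManager:
    def __init__(self):
        self.mcp_thread = None  # only created once MCP is actually enabled

    def start(self):
        self.mcp_thread = threading.Thread(target=lambda: None, daemon=True)
        self.mcp_thread.start()

    def stop_servers(self):
        # Thread.join() raises RuntimeError on a thread that was never
        # started, so check it exists and actually ran (ident is set by
        # start()) before joining.
        if self.mcp_thread is not None and self.mcp_thread.ident is not None:
            self.mcp_thread.join()

manager = McpManager()
manager.stop_servers()  # no-op instead of RuntimeError when MCP was never enabled
```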
I've found what the problem was with the MCP shutting down when we interrupt an LLM. The Ctrl-C sends a SIGTERM or SIGINT to all the MCP servers. I've created a bash wrapper for the MCP server to trap the interrupts.
```bash
#!/bin/bash
# trapping the SIGINT signal so we ignore it
trap '' SIGINT
#trap '' SIGTERM
eval "$@"
```
In the mcp config file:

```yaml
servers:
  - name: git-server
    command: "/path/to/trap_wrapper.sh python -m mcp_server_git"
    env:
      GIT_SERVER: server
    permissions:
      git_status: auto
      git_log: auto
```
This isn't a clean solution though, because with this we can't gracefully shut down the MCP thread...
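A possible Python-side alternative (just a sketch, untested against this PR): launch each stdio server in its own session, so the terminal's Ctrl-C never reaches it, while aider keeps the process handle and can still shut it down explicitly. `start_new_session` is POSIX-only; on Windows, `creationflags=subprocess.CREATE_NEW_PROCESS_GROUP` would be the rough analogue.

```python
import signal
import subprocess
import sys

# Stand-in child that just sleeps; in aider this would be the MCP server argv.
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    # New session -> new process group: the terminal-generated SIGINT from
    # Ctrl-C is delivered to aider's foreground group only, not to this child.
    start_new_session=True,
)

# A Ctrl-C in aider now only interrupts aider; the server keeps running.
# Graceful shutdown is still possible because we hold the handle:
proc.send_signal(signal.SIGTERM)  # or proc.terminate()
proc.wait(timeout=5)
```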
It looks like the Dockerfile should instead be based on Python 3.12 after the new changes. I checked out the new code but still get the error:
```
Traceback (most recent call last):
  File "aider", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "main.py", line 1157, in main
    coder.run()
  File "base_coder.py", line 862, in run
    self.run_one(user_message, preproc)
  File "base_coder.py", line 903, in run_one
    message = self.preproc_user_input(user_message)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "base_coder.py", line 892, in preproc_user_input
    return self.commands.run(inp)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "commands.py", line 304, in run
    return self.do_run(command, rest_inp)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "commands.py", line 276, in do_run
    return cmd_method(args)
           ^^^^^^^^^^^^^^^^
  File "commands.py", line 1020, in cmd_quit
    self.cmd_exit(args)
  File "commands.py", line 1014, in cmd_exit
    stop_mcp_servers()
  File "__init__.py", line 19, in stop_mcp_servers
    mcp_manager.stop_servers()
  File "mcp_manager.py", line 293, in stop_servers
    self.mcp_thread.join()
  File "threading.py", line 1144, in join
    raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
```
And by the way, I can't get MCP to work... Do you mind giving a simple example of how you configure MCP servers, how you prompt, and what the result is? Thanks.
~I'm in favor of using litellm for MCP support, especially with the spec evolving rapidly (e.g. streamable HTTP, auth, etc.); it would mean aider would get support for those changes earlier (at least in theory), as litellm is used in a number of large projects. @ishaan-jaff could you recommend a good starting point, or perhaps you could start a fork others can contribute to?~
[edit] @ishaan-jaff after looking into it, it appears litellm only supports the SSE transport. I imagine there would still be high demand for MCP servers that use the stdio transport. Does litellm ever plan to support stdio? It looks like there are some sentiments that it should not be supported.
FYI Streamable HTTP support just landed in mcp python sdk main branch. Release should follow soon. This would be a good time to implement the client-side HTTP protocol here since SSE is now deprecated.
Taking nothing away from @Antonin-Deniau's work and their effort in responding to feedback – native MCP support in Aider is definitely something we need, and this PR is a valuable step. Thank you!
I wanted to share an alternative PR I've been working on that also adds stdio MCP server support to Aider and aims to address some of the concerns raised in this discussion: https://github.com/Aider-AI/aider/pull/3937.
My approach includes:
- Integration with MCP Servers using LiteLLM's MCP Tool Bridge and the Python MCP SDK (so we can have that streamable HTTP transport @bendavis78)
- Adherence to the standard MCP Server Configuration schema (like Claude, Cursor, etc.)
- Operation on a single thread
I'd be grateful if folks could take a look and share their thoughts. I believe it offers a slightly different perspective on solving this problem.
~@quinlanjager I could be mistaken, but I didn't see that litellm's mcp bridge supports stdio transport servers.~
[edit] I just saw your PR includes support for stdio 👍
Should this PR be closed in favor of #3937 ?
I would say so