[BUG]: Have to launch from command-line for MCP server connections to not fail
It's great that AnythingLLM now supports MCP servers. This opens up a world of possibilities. Thank you. However, I'm having a hard time getting it to work as expected.
How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
My installation of AnythingLLM (v1.8.2) for macOS (v15.5) can only connect to MCP servers (both local and remote) if I launch AnythingLLM from the command line.
I don't have this issue with Claude desktop app and BoltAI app, using the same servers.
This suggests to me that something particular about AnythingLLM leaves it with an incomplete Node/npx environment when it is launched as an app.
Front-end errors are:
Failed to start MCP server: desktop-commander [-32000] MCP error -32000: Connection closed
Failed to start MCP server: monday-api-mcp-hosted [-32000] MCP error -32000: Connection closed
etc.
Related logs:
{"level":"info","message":"\u001b[36m[r]\u001b[0m Failed to start MCP server: iMCP {\"error\":\"spawn /private/var/folders/f5/xj8j7h19193_32dbgh9sfwgr0000gn/T/AppTranslocation/D3E2387F-2F04-42B1-BFD2-82A30E386F32/d/iMCP.app/Contents/MacOS/imcp-server ENOENT\",\"code\":\"ENOENT\",\"syscall\":\"spawn /private/var/folders/f5/xj8j7h19193_32dbgh9sfwgr0000gn/T/AppTranslocation/D3E2387F-2F04-42B1-BFD2-82A30E386F32/d/iMCP.app/Contents/MacOS/imcp-server\",\"path\":\"/private/var/folders/f5/xj8j7h19193_32dbgh9sfwgr0000gn/T/AppTranslocation/D3E2387F-2F04-42B1-BFD2-82A30E386F32/d/iMCP.app/Contents/MacOS/imcp-server\",\"stack\":\"Error: spawn /private/var/folders/f5/xj8j7h19193_32dbgh9sfwgr0000gn/T/AppTranslocation/D3E2387F-2F04-42B1-BFD2-82A30E386F32/d/iMCP.app/Contents/MacOS/imcp-server ENOENT\\n at ChildProcess._handle.onexit (node:internal/child_process:283:19)\\n at onErrorNT (node:internal/child_process:476:16)\\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21)\"}","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Attempting to start MCP server: monday-api-mcp-hosted","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Failed to start MCP server: monday-api-mcp-hosted {\"error\":\"MCP error -32000: Connection closed\",\"code\":-32000,\"stack\":\"McpError: MCP error -32000: Connection closed\\n at Client._onclose (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/shared/protocol.js:101:23)\\n at _transport.onclose (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/shared/protocol.js:73:18)\\n at ChildProcess.<anonymous> (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/client/stdio.js:97:77)\\n at ChildProcess.emit (node:events:513:28)\\n at maybeClose (node:internal/child_process:1091:16)\\n at Socket.<anonymous> (node:internal/child_process:449:11)\\n at Socket.emit (node:events:513:28)\\n at Pipe.<anonymous> (node:net:322:12)\"}","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Attempting to start MCP server: monday-api-mcp","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Failed to start MCP server: monday-api-mcp {\"error\":\"MCP error -32000: Connection closed\",\"code\":-32000,\"stack\":\"McpError: MCP error -32000: Connection closed\\n at Client._onclose (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/shared/protocol.js:101:23)\\n at _transport.onclose (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/shared/protocol.js:73:18)\\n at ChildProcess.<anonymous> (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/client/stdio.js:97:77)\\n at ChildProcess.emit (node:events:513:28)\\n at maybeClose (node:internal/child_process:1091:16)\\n at Socket.<anonymous> (node:internal/child_process:449:11)\\n at Socket.emit (node:events:513:28)\\n at Pipe.<anonymous> (node:net:322:12)\"}","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Successfully started 0 MCP servers: []","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Attempting to start MCP server: desktop-commander","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Failed to start MCP server: desktop-commander {\"error\":\"MCP error -32000: Connection closed\",\"code\":-32000,\"stack\":\"McpError: MCP error -32000: Connection closed\\n at Client._onclose (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/shared/protocol.js:101:23)\\n at _transport.onclose (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/shared/protocol.js:73:18)\\n at ChildProcess.<anonymous> (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/client/stdio.js:97:77)\\n at ChildProcess.emit (node:events:513:28)\\n at maybeClose (node:internal/child_process:1091:16)\\n at Socket.<anonymous> (node:internal/child_process:449:11)\\n at Socket.emit (node:events:513:28)\\n at Pipe.<anonymous> (node:net:322:12)\"}","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Attempting to start MCP server: iMCP","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Failed to start MCP server: iMCP {\"error\":\"spawn /private/var/folders/f5/xj8j7h19193_32dbgh9sfwgr0000gn/T/AppTranslocation/D3E2387F-2F04-42B1-BFD2-82A30E386F32/d/iMCP.app/Contents/MacOS/imcp-server ENOENT\",\"code\":\"ENOENT\",\"syscall\":\"spawn /private/var/folders/f5/xj8j7h19193_32dbgh9sfwgr0000gn/T/AppTranslocation/D3E2387F-2F04-42B1-BFD2-82A30E386F32/d/iMCP.app/Contents/MacOS/imcp-server\",\"path\":\"/private/var/folders/f5/xj8j7h19193_32dbgh9sfwgr0000gn/T/AppTranslocation/D3E2387F-2F04-42B1-BFD2-82A30E386F32/d/iMCP.app/Contents/MacOS/imcp-server\",\"stack\":\"Error: spawn /private/var/folders/f5/xj8j7h19193_32dbgh9sfwgr0000gn/T/AppTranslocation/D3E2387F-2F04-42B1-BFD2-82A30E386F32/d/iMCP.app/Contents/MacOS/imcp-server ENOENT\\n at ChildProcess._handle.onexit (node:internal/child_process:283:19)\\n at onErrorNT (node:internal/child_process:476:16)\\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21)\"}","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Attempting to start MCP server: monday-api-mcp-hosted","service":"backend"}
{"level":"info","message":"\u001b[36m[r]\u001b[0m Failed to start MCP server: monday-api-mcp-hosted {\"error\":\"MCP error -32000: Connection closed\",\"code\":-32000,\"stack\":\"McpError: MCP error -32000: Connection closed\\n at Client._onclose (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/shared/protocol.js:101:23)\\n at _transport.onclose (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/shared/protocol.js:73:18)\\n at ChildProcess.<anonymous> (/Applications/AnythingLLM.app/Contents/Resources/backend/node_modules/@modelcontextprotocol/sdk/dist/cjs/client/stdio.js:97:77)\\n at ChildProcess.emit (node:events:513:28)\\n at maybeClose (node:internal/child_process:1091:16)\\n at ChildProcess._handle.onexit (node:internal/child_process:302:5)\"}","service":"backend"}
Please advise if there is some way to make AnythingLLM connect to MCP servers without having to launch it via the command line (i.e. when it is launched from the desktop GUI).
Are there known steps to reproduce?
No response
I have the same problem
This almost certainly has to do with ENV inheritance. What does your MCP config look like? It is either the ENV or the execution permissions of the MCP command not being allowed - the error message MCP gives back does not help much with debugging.
Considering that spawn /private/var/folders/f5/xj8j7h19193_32dbgh9sfwgr0000gn/T/AppTranslocation/ is trying to spawn from what is likely a userspace-protected folder, that is my current hunch.
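For anyone debugging along these lines, one thing worth trying is passing the environment explicitly in the server definition instead of relying on inheritance. A minimal sketch, assuming AnythingLLM merges a per-server env block the way other MCP clients do (the server name, package name, and paths below are placeholders, not a confirmed fix):

{
  "mcpServers": {
    "example-stdio-server": {
      "command": "/opt/homebrew/bin/npx",
      "args": ["-y", "some-mcp-package"],
      "env": {
        "PATH": "/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
      }
    }
  }
}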
Same issue here. The applied configurations work fine for Claude and other MCP agents, but do not work for AnythingLLM. Tested with various MCP servers, both public and self-developed.
I'm having the same issue as well with multiple MCP servers, using Python and npm:
"mcpServers": {
"kali_mcp": {
"command": "python",
"args": [
"/Users/mohammad/Projects/MCP-Kali-Server/mcp_server.py",
"--server",
"http://localhost:3002/"
],
"type": "stdio"
},
}
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": [
        "@playwright/mcp@latest"
      ]
    }
  }
}
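Given the ENV hypothesis above, a hedged variant of the same entries with absolute command paths may be worth trying (run "which python3" / "which npx" to find the right paths for your machine; the ones below are only illustrative):

{
  "mcpServers": {
    "kali_mcp": {
      "command": "/usr/local/bin/python3",
      "args": [
        "/Users/mohammad/Projects/MCP-Kali-Server/mcp_server.py",
        "--server",
        "http://localhost:3002/"
      ],
      "type": "stdio"
    },
    "playwright": {
      "command": "/opt/homebrew/bin/npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}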
I'm having the same issue with a Docker-based server, even though Docker is installed and working; AnythingLLM just doesn't recognize it. This one:
"context7": {
"command": "npx",
"args": ["-y", "@upstash/context7-mcp"]
},
on the other hand, works.
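For reference, a Docker-backed entry usually looks something like the sketch below; per the discussion further down, using the absolute path to the docker binary (rather than relying on a shell alias or the inherited PATH) may matter here. The server and image names are placeholders:

{
  "mcpServers": {
    "my-docker-server": {
      "command": "/usr/local/bin/docker",
      "args": ["run", "-i", "--rm", "some/mcp-image"]
    }
  }
}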
Same issue
Same issue even with a local MCP (stdio)
This almost certainly has to do with ENV inheritance. What does your MCP config look like? It is either the ENV or the execution permissions of the MCP command not being allowed - the error message MCP gives back does not help much with debugging.
Considering that spawn /private/var/folders/f5/xj8j7h19193_32dbgh9sfwgr0000gn/T/AppTranslocation/ is trying to spawn from what is likely a userspace-protected folder, that is my current hunch.
Thanks @timothycarambat. Since posting my issue report, I've been busy with other projects and haven't had time to look into this. But I wanted to acknowledge your response and suggestions; I will try this out as soon as I am able.
In the meantime, it appears others have encountered what might be the same issue. Perhaps they also have env issues in their MCP configs?
I just tried with an "empty" stdio MCP server and am still facing the same error.
Update: it only fails on macOS; Windows works fine.
Same problem here. It stops me from using your product; I want to connect to Notion and Jira.
I believe the problem is here
https://github.com/Mintplex-Labs/anything-llm/blob/b44cf21caac5bf086d7a6ed41ee87907c7629a8b/server/utils/MCP/hypervisor/index.js#L239
let baseEnv = {
  PATH: process.env.PATH || "/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin",
  NODE_PATH: process.env.NODE_PATH || "/usr/local/lib/node_modules",
};
I have npm installed via Homebrew, with Homebrew's path in my zsh profile.
I'd have this issue launching from bash as well, since the path isn't set there.
I'm not too familiar with Node, but /opt/homebrew/lib/node_modules is the NODE_PATH when npm is installed via Homebrew.
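To make the idea concrete, here is a rough sketch (not the actual AnythingLLM code) of how the hypervisor's baseEnv could pick up the user's login-shell PATH on macOS instead of the hard-coded fallback, similar in spirit to what the fix-path/shell-env packages do:

// Rough sketch only - not the actual AnythingLLM implementation.
// Ask the user's login shell for its PATH so Homebrew/npx locations are visible
// even when the app was launched from Finder with a minimal environment.
const { execSync } = require("child_process");

function loginShellPath() {
  if (process.platform === "win32") return process.env.PATH;
  try {
    const shell = process.env.SHELL || "/bin/zsh";
    // -l = login shell, -i = interactive, -c = run command; this sources the
    // user's profile files (~/.zprofile, ~/.zshrc, ...) before printing PATH.
    return execSync(`${shell} -ilc 'echo -n "$PATH"'`, { encoding: "utf8" });
  } catch {
    return process.env.PATH || "/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin";
  }
}

const baseEnv = {
  ...process.env,
  PATH: loginShellPath(),
};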
Version 1.8.5 r2: Failed to start MCP server: sequentialthinking [-32000] MCP error -32000. The MCP servers work normally from Claude Desktop and Postman!
@zuicis The docker alias is probably not available in the subshell we use, which is why it would work via manual execution, where the alias is available.
If you run "where docker" and use the resulting path for the command, it should work.
Hi, I understood your point, but the issue persists.
- I manually added the PATH at the system level so that calling the docker command resolves to C:\Program Files\Docker\Docker\resources\bin\
- I tried the suggested "where docker" command
- I tried specifying the full path to docker.exe - C:\Program Files\Docker\Docker\resources\bin\docker.exe
It looks like now, with the PATH set and using "docker", or with the full path to docker.exe, AnythingLLM does find Docker Desktop, but the connection to Docker is not kept open. The moment I restart the MCP servers (using AnythingLLM's restart option), the containers appear briefly and then immediately disappear.
I’ll test a few more options and wait for the next version :) Have a productive day!
the containers appear briefly and then immediately disappear.
Seems like Docker might be starting these containers with --rm. Can you copy/paste that mcp_config (omit credentials if present)? I think #4299 will fix this.
I'm using the AnythingLLM Desktop version under macOS. First of all, it seems the value of the command key within the definition of an MCP server in the ~/Library/Application Support/anythingllm-desktop/storage/plugins/anythingllm_mcp_servers.json file MUST be an absolute path - the same one returned by the "which" command at a prompt. E.g. if "which uv" returns /Users/myusername/.local/bin/uv, then /Users/myusername/.local/bin/uv has to be the value of the command key instead of uv.
Once the configuration file has been updated and the MCP servers refreshed (Settings / Agent Skills / MCP Servers / Refresh), the server should show as "On", and if you click on it, the offered tools are also displayed. In a lucky case the MCP server can then actually be used by the LLM.
Unfortunately, there are cases in which the MCP server itself uses commands or special packages that are likewise inaccessible in the environment AnythingLLM provides when calling the MCP server. I do not currently know the solution for such cases. I encountered this with mcp-pandoc, which uses pypandoc for document conversion and works fine with LM Studio but not with AnythingLLM. How can we influence the environment the MCP server sees? A podman or docker container encapsulating the MCP server together with the tools it uses would probably solve the problem, but I have not tested that yet.
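To make the absolute-path point concrete, an entry in anythingllm_mcp_servers.json would then look roughly like the sketch below. The path is whatever "which uvx" (or "which uv") prints on your machine, and the args follow the commonly documented mcp-pandoc invocation; adjust both for your setup:

{
  "mcpServers": {
    "mcp-pandoc": {
      "command": "/Users/myusername/.local/bin/uvx",
      "args": ["mcp-pandoc"],
      "type": "stdio"
    }
  }
}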
Confirming what @timothycarambat and @jmvalente already suspected: this is an ENV inheritance issue on macOS. Launching from Finder/Spotlight breaks MCP servers, but launching from terminal with open -a "AnythingLLM" works reliably.
Environment
- AnythingLLM Desktop v1.9.0
- macOS Sequoia 15.6.1 (Apple M2 Pro)
What I tested
- ❌ Launching from Finder/Spotlight → MCP servers fail with [-32000] Connection closed.
- ✅ Launching from terminal with open -a "AnythingLLM" → MCP servers connect and tools are available.
Why
- As @timothycarambat noted, Finder‑launched apps don’t inherit the user’s shell environment.
- As @jmvalente pointed out, the code hard-codes a fallback PATH/NODE_PATH if process.env is empty. On macOS, that's exactly what happens when launched from Finder.
- So the app never sees Homebrew's /opt/homebrew/bin or /opt/homebrew/lib/node_modules.
Workarounds
- Reliable: Launch from terminal (works every time).
- Partial: Absolute paths in anythingllm_mcp_servers.json. Some users reported absolute paths helped, but in my case (and others) it didn't solve the issue, which suggests the problem is broader than just PATH resolution?
- Untested: launchctl setenv PATH "..." at login could, in theory, fix Finder launches by injecting a PATH into the GUI session. But even with a sanitized PATH (system + toolchain dirs only), this feels like a blunt workaround rather than a proper solution. I haven't tested it and would appreciate input from others on whether it's safe practice.
Suggestion
- Short-term: Maybe add the terminal-launch workaround for macOS to the docs (MCP Compatibility → MCP on Desktop). It took me a long time to figure this much out.
- Long-term: Patch the desktop app to explicitly load the user's login shell environment on macOS, instead of relying on a minimal process.env.
Would love confirmation from someone more seasoned - is the launchctl approach safe, or is there a better way to inject the user’s PATH into Finder‑launched apps? A proper patch would be ideal, but I don’t have the skill to attempt it myself.
@Sinkingdev This is the best and most helpful GitHub comment I have seen in a while. We will properly patch this - the easiest fix is likely doing something simple like what fix-path does on Mac/Linux.
Using launchctl would likely work as well, but I would rather not; the safer option is to run from the terminal. If we do the path fix, it will work for macOS, which is the OS particularly affected by this issue.
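For readers following along, the fix-path approach mentioned above typically amounts to a one-time call at app startup that replaces process.env.PATH with the user's shell PATH. A hedged sketch, not AnythingLLM's actual patch (fix-path v3 is CommonJS as shown here; newer releases are ESM-only):

// Hedged sketch of the fix-path approach, not AnythingLLM's actual patch.
// fix-path rewrites process.env.PATH on macOS/Linux to match the user's shell,
// so child processes spawned later (npx, uvx, docker, ...) resolve correctly.
const fixPath = require("fix-path"); // v3.x (CommonJS); v4+ is ESM-only

fixPath();
// From here on, spawned MCP servers inherit the corrected PATH.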
@timothycarambat Oh wow. Thanks for the feedback. This is mostly new territory for me, so your input is really valuable. Glad I could help move things forward!
Unfortunately, the workaround @Sinkingdev suggested, starting AnythingLLM from the terminal with the 'open -a "AnythingLLM"' command, did not work for me with the mcp-pandoc MCP server. AnythingLLM displayed the following response when the LM Studio-hosted LLM (openai/gpt-oss-120b) tried to use the tool: "I’m sorry, but I can’t complete the conversion at the moment because the tool that turns Markdown into an ODT document (pandoc) isn’t available on this system." In fact, Pandoc is installed, and mcp-pandoc works fine with LM Studio 0.3.30.
I believe the workaround suggested by @Sinkingdev may solve the problem of finding the command that starts the MCP server, but the MCP servers themselves still do not find third-party libraries (e.g. the installed pypandoc in my case), because something is missing from the environment in which AnythingLLM starts the MCP server.
I still believe that podman or docker container encapsulation of the MCP server together with the tools it uses would solve the problem, but I still have not tested that, because I'm new to container image creation and have not had time for the deep dive.
I would also consider adding a text field for the PATH variable in the application settings and making that the highest priority (then the process env, then the fallback) when loading. Alternatively, a "long text" field for multiple lines if more than just the path needs to be set.
This also avoids having to check for variations in shells (.bashrc, .bash_profile, .zshrc, etc.) when loading the shell env, and provides a way to set the path and other vars specifically for AnythingLLM.
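If such a setting existed, the loading logic could look roughly like this. This is entirely hypothetical; "mcpPathOverride" is an invented name, not a real AnythingLLM setting:

// Hypothetical precedence sketch for a user-configurable PATH override.
// "settings.mcpPathOverride" is an invented name, not a real AnythingLLM setting.
function resolveMcpPath(settings) {
  return (
    settings?.mcpPathOverride ||                    // 1. value from the app settings UI
    process.env.PATH ||                             // 2. whatever the app inherited
    "/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"  // 3. current hard-coded fallback
  );
}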
Although it seems to be a flexible solution, I'm not sure it simplifies life for users or solves all the problems.
What could be the difference between the way LM Studio or VS Code extensions like Kilocode invoke MCP servers and the way AnythingLLM does?
If the same method could be followed, there would be no need for platform- or environment-specific tweaks at the user level.
(Edit: yes, the same mcp-pandoc MCP server works fine under Kilocode as well, without setting PATH or other environment variables; only the standard MCP configuration JSON block was added to the config. Fortunately Kilocode is open source, so it is possible to compare how they launch MCP servers and the environment they provide: https://github.com/Kilo-Org/kilocode/tree/main/src/services/mcp )