Confusion around configuring MCP in Void
- Press `Cmd+Shift+P` in Void, and type `Help: About`. Please paste the information here. Also let us know any other relevant details, like the model and provider you're using if applicable.
VSCode Version: 1.99.30036
Void Version: 1.4.1
Commit: 406d0dc2975f171c65236e1cd8891d38ce7e349d
Date: 2025-05-31T06:44:18.511Z (3 days ago)
Electron: 34.3.2
ElectronBuildId: undefined
Chromium: 132.0.6834.210
Node.js: 20.18.3
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.5.0
- Describe the issue/feature here!
First, it's a bit confusing to set up MCP in Void, since the command palette shows the vscode-mcp configuration, while to configure Void's you need to click the gear icon in the Void chat.
Next, once I found the difference and started to set up Void, I noticed that Void kept failing, even though the MCP servers worked on the VSCode side:
Also, a QoL suggestion: make the command copyable so the user can test it separately.
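For reference, the kind of entry I mean in Void's mcp.json looks roughly like the sketch below; the server name and package here are illustrative rather than my exact config:

```json
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    }
  }
}
```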
I'm experiencing the same thing. I can verify that the Hubspot MCP server is awaiting requests, but Void is unable to work with it or find it for whatever reason.
All helpful feedback (also the comment re: the gear icon). I tried these MCP servers, i.e. Hubspot, Sentry, sequential thinking, etc., and they seem to work fine (see screenshots).
I think the ENOENT error in both issues above may have to do with npx's path settings, possibly due to Void not inheriting your shell's PATH properly, especially on macOS ARM (Darwin arm64) where Node might be installed via Homebrew or nvm.
Maybe try running `which npx` in a terminal, and if it gives a valid path, run `open -a Void` from that same terminal. Then try these MCP servers again, or save `mcp.json` to refresh.
Are you able to use the MCP servers? I attached GitHub's MCP server and gave it a prompt to create a repository, since that is one of the tools GitHub's MCP server provides, but it did not create a repository.
On Mac, the command parameter for MCP must use an absolute path, but npx tools will report an "MCP error 32000", and even if configured successfully, the local model still doesn't invoke the MCP tools successfully.
- snapshot of using an absolute path:
- use `open -a Void` to open the editor, and use `qwen2.5-coder-32b-instructor` as the LLM model
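To make the absolute-path workaround concrete, here is a sketch assuming a Homebrew-installed npx; the server package is only illustrative:

```json
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "/opt/homebrew/bin/npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    }
  }
}
```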
@vrtnis Thanks for the response. `open -a Void` seems to work for my machine (I have npx installed via Volta, which is similar to nvm).
Would it be possible to use the `$PATH` in Void, similar to how it is used on the command palette side, so Void doesn't need to be opened via the terminal?
One quick answer is that you can just add the Volta path to your runtime args. So do a Ctrl+Shift+P → Configure Runtime Arguments (i.e. argv.json) and add your Volta path to your env, so something like `{ "env": { "PATH": "/Users/<youruser>/.volta/bin:/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin" } }`
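In full-file form, a sketch of argv.json after the change; the crash-reporter keys are just examples of defaults your file may already contain (keep whatever is there and add `env` alongside), and the exact Volta path depends on your install:

```json
{
  "enable-crash-reporter": true,
  "crash-reporter-id": "00000000-0000-0000-0000-000000000000",
  "env": {
    "PATH": "/Users/<youruser>/.volta/bin:/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
  }
}
```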
Thanks! Any idea why this explicit runtime arg is needed for Void but not for VSCode / the command palette? Would be great if it picked up the environment automatically.
Same problem here, so it would be great if you could fix this on Apple M-series devices. I am using a MacBook Pro with an M4 chip and it's not working for me.
> Maybe try running `which npx` in a terminal, and if it gives a valid path, run `open -a Void` from that same terminal. Then try these MCP servers again, or save `mcp.json` to refresh.
Path was valid and no change. I'm on an M2 using nvm.
@mymark21
I was able to get further. I use nvm, so likely there was some weird path-parsing issue that you have to work around by making the paths explicit.
- Get the path for node: `which node`
- Make sure to install the MCP server you want to run
- Get the path to the MCP server's node module index, i.e. `/Users/username/.nvm/versions/node/v18.20.8/lib/node_modules/@hubspot/mcp-server/dist/index.js`
- Exit and reopen
What made the difference for me was setting the actual path of the MCP server.
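Putting those steps together, a sketch of the resulting mcp.json entry; it assumes nvm's default layout and the Hubspot server path from step 3, so adjust the Node version and any required environment variables (such as an access token) for your setup:

```json
{
  "mcpServers": {
    "hubspot": {
      "command": "/Users/username/.nvm/versions/node/v18.20.8/bin/node",
      "args": [
        "/Users/username/.nvm/versions/node/v18.20.8/lib/node_modules/@hubspot/mcp-server/dist/index.js"
      ]
    }
  }
}
```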
Additionally, what I found is that my locally hosted LLM still wasn't referencing the MCP servers even when they were correctly installed.
I'm removing nvm and seeing if that makes a difference.
Appears that the issue is not w/ nvm at all.
Hi, so I need to change `"command"` in the config, right? Confused at step 3.
Thanks for the report, this seems to be a path issue, but it's not consistent. Will look into this.
Also see @vrtnis's response in #808 for a recap
Related:
#838 #705 #770 #701 #789 #833 #656 #419 #752
https://github.com/voideditor/void/issues/808#issuecomment-3169607644
Hello folks. I'm reviewing quite a lot of MCP issues, and it seems this thread is referred to very often, so to avoid duplication I'm going to ask here.
I am building an MCP server, using stdio, through Microsoft's C# SDK for MCP. I am fairly convinced that my configuration file (`C:/Users/<current-user-name>/.void-editor/mcp.json`) is well-defined, since it can:
- be configured, run, connected, and called by the LLM in VSCode (GitHub) Copilot;
- be configured, run, and connected in Void.
and by "configured" I mean a green light in settings, like this:
The output of the server looks pretty much the same in both consoles; for example, they both look like this:
However, when it comes to asking the AI to call the tools, the AI (gemini-2.5-pro, to be specific) refused to call the tool and insisted that it does not exist at all, while the AI on the VSCode side called it quite normally and obtained a correct answer.
I am very curious about the mechanism of this "bug", as I assume it is, and also about its solution. Attaching the different responses of the AIs on each side.
VSCode:
Void:
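For completeness, my entry follows the usual stdio shape, roughly like the sketch below; the server name and project path here are placeholders rather than my real values:

```json
{
  "mcpServers": {
    "my-csharp-server": {
      "command": "dotnet",
      "args": ["run", "--project", "C:/path/to/MyMcpServerProject"]
    }
  }
}
```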
Hi @Myriad4Link,
Could you provide further details about your configuration / code / environment?
Mine is working correctly under the same framework and prompts (except the model; I can't find gemini-2.5-pro in the list).
Or provide the code like below:
https://github.com/sblzdddd/VoidMCPExample
Hi @sblzdddd. I think, very unexpectedly, the remark "(except the model; I can't find gemini-2.5-pro in the list)" is actually the most significant insight you've provided in your greatly appreciated reply. I'm saying that because, after trying gemini-2.0-flash, I noticed the problem is not related to the MCP server configurations themselves, but is the fault of the custom models' configurations. You see, gemini-2.0-flash went well on my local machine too:
While gemini-2.5-pro and gemini-2.5-flash do not, because those are custom models. Void's own LLM provider model presets are, to some extent, out of date, and since gemini-2.5-pro and gemini-2.5-flash are the state-of-the-art models that Google provides, I added them to the list manually.
The difference is this: in the Advanced Settings of gemini-2.0-flash, a line sets `"specialToolFormat": "gemini-style"`, which is not present in the defaults for custom models. If you append that line to the custom models' Advanced Settings, everything works perfectly.
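For anyone hitting the same wall, a minimal sketch of the override; only `specialToolFormat` is confirmed here, so merge it into whatever JSON your custom model's Advanced Settings already contain rather than replacing them:

```json
{
  "specialToolFormat": "gemini-style"
}
```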
I have to thank you for your reply once again, and to be honest I feel quite stupid for not thinking of that. Anyway, case solved.
