MCP Resources and Prompts
MCP Resources and Prompts are being implemented by various popular AI IDEs and CLI tools (e.g. VSCode Copilot, Claude Code, Claude Desktop). I would point to VSCode specifically as my personal favorite implementation of both MCP features.
Some notable use cases under this (rather large) feature umbrella could be:
- allow Prompts to be triggered via slash-commands
- allow Resources to be attached to the context window via similar slash commands (VSCode uses an "@" prefix for resources, rather than "/")
- display all attached Resources as e.g. chip components in the chat UI, with the resource name/title
- allow Tool results to add Resources to the context window
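To ground the use cases above, here is a sketch of the data shapes involved, per my reading of the MCP spec's `prompts/list` and `resources/list` results (simplified; the example prompt/resource values are made up for illustration):

```typescript
// Simplified result shapes a client sees from prompts/list and resources/list
// (field names per the MCP spec; optional fields trimmed for brevity).
interface McpPrompt {
  name: string; // surfaced as the slash-command name
  description?: string;
  arguments?: { name: string; description?: string; required?: boolean }[];
}

interface McpResource {
  uri: string; // e.g. "wordpress://rules/base"
  name: string; // surfaced on the chip component in the chat UI
  description?: string;
  mimeType?: string;
}

// Example of what a server might advertise (hypothetical values):
const prompts: McpPrompt[] = [
  {
    name: "review-pr",
    description: "Review a pull request",
    arguments: [{ name: "pr_url", required: true }],
  },
];
const resources: McpResource[] = [
  { uri: "wordpress://rules/base", name: "WordPress base rules", mimeType: "text/markdown" },
];

// A client can then render prompts as slash commands and resources as "@" attachments.
const slashCommands = prompts.map((p) => `/${p.name}`);
const atCommands = resources.map((r) => `@${r.name}`);
```

The point of the split is that prompts are user-initiated templates while resources are context attachments, which is why clients map them to different trigger characters.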
I'm quite familiar with the MCP spec, as well as its implementation in the major MCP clients, so let me know what other details you'd like - more than happy to help.
I think the idea is great ❤️ However, how about an agent-zero style approach, where we could have an "MCP creator" agent, with a single- or multi-shot prompt, that could dynamically create local MCP servers? For context, it should invoke a "web search agent" (configurable please; I run a SearXNG docker/podman container, no API key required) that searches for relevant info on how to implement an API or CLI tool call, then passes the compiled context back to the MCP creator agent, which writes the MCP config file and sends the tool definition to the parent agent.
I would also love this. An absolutely underrated Claude Code feature, imo, is including MCP resources in rules. The syntax is simple, and it allows remotely updating rules for companies or open source projects. Pretty neat!
Example we have in our rules:
## WordPress
- Use @wordpress-dev-docs:wordpress://rules/base as your WordPress base coding rule
- Use @wordpress-dev-docs:wordpress://rules/quality_assurance_phpstan as a guideline when working with PHPStan in a WordPress context
- Use @wordpress-dev-docs:wordpress://rules/quries as a guideline when creating WordPress queries
Doc Reference: https://docs.anthropic.com/en/docs/claude-code/mcp#use-mcp-resources
+1 to this, would make opencode immensely more usable for folks whose dev workflows are often guided by custom prompts/commands
I'd love to have the support for prompts and resources. ❤️
I'd +1 the slash-command for prompts and at-sign for resource reference.
I'm making an MCP server for storing prompts, and I have noticed that woefully few MCP client/host implementations support those capabilities. I'd be very happy if I could use my MCP server with Opencode one day. In any case, it would definitely open new possibilities for Opencode as well.
Huge +1 here.
My team has created an internal MCP server that we're using to share resources, prompts, and tools for all our devs under a single, authenticated, HTTP endpoint. We don't want to commit "common prompts" into all our repositories. We don't want to make devs copy/paste large prompts from some Confluence page. Exposing them via an MCP has worked well.
I have a personal mcp server just for the purpose of sharing prompts across different tools.
would be great if opencode could have those as slash commands, the same way avante.nvim and others do
+1
+1
Slash commands should be coming sometime this week: https://x.com/thdxr/status/1955718582639251488
Slash commands look excellent, but sadly don't seem to work via MCP yet.
Great foundation though; it shouldn't be too hard to plug MCP prompts into this feature.
https://opencode.ai/docs/commands/
In the meantime, I'm symlinking my prompts collection all over the place 😅
+1
+1
+1
For those who have recently reacted: are those +1s specifically for MCP prompts?
Yeah, prompts for MCP
Yes. I recently added a prompt to my MCP server and I'm not even sure how I would call it from opencode without slash commands. I can see the prompt being available in the MCP Inspector and as a slash command in Claude Code, but when I ask for it in opencode (Claude Sonnet 4) it can't find it:
(screenshots: opencode, Claude Code, MCP Inspector)
I think this can be best addressed once vercel ai sdk adds support for this: https://github.com/vercel/ai/issues/6294
@rekram1-node if I could push back here a little bit: it doesn't really make sense for the AI SDK to implement "prompt" support like you are suggesting. Prompts are generally exposed as slash commands or parameterized forms meant for the user to initiate and fill in, rather than something for the underlying LLM/AI to interact with. Using the MCP TypeScript SDK, the way prompts should be exposed in OpenCode should be similar to how it's done in Claude Code, where typing "/" makes a ListPromptsRequest to every connected MCP server and adds the results to the dropdown of commands. Basically, the LLM isn't given a list of prompts the way it's given a list of tools; a user chooses and fills in a prompt template, which gets sent to the LLM.
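A rough sketch of that flow: typing "/" triggers a prompts-list request to every connected MCP server, the results populate a command dropdown, and the chosen prompt (with user-filled arguments) is rendered and sent to the LLM. `McpClient` below is a stand-in interface for illustration, not the real MCP TypeScript SDK types:

```typescript
// Stand-in for an MCP client connection (illustrative, not the SDK's types).
interface McpClient {
  server: string;
  listPrompts(): Promise<{ name: string; description?: string }[]>;
  getPrompt(name: string, args: Record<string, string>): Promise<string>;
}

// Build the "/" dropdown entries by asking every connected server for its prompts.
async function buildPromptDropdown(clients: McpClient[]) {
  const entries: { command: string; client: McpClient; prompt: string }[] = [];
  for (const client of clients) {
    for (const p of await client.listPrompts()) {
      entries.push({ command: `/${client.server}:${p.name}`, client, prompt: p.name });
    }
  }
  return entries;
}

// When the user picks an entry and fills in arguments, fetch the rendered
// prompt text; the host then sends that text to the LLM as a user message.
async function resolvePromptCommand(
  entry: { client: McpClient; prompt: string },
  args: Record<string, string>,
): Promise<string> {
  return entry.client.getPrompt(entry.prompt, args);
}
```

The LLM never sees the prompt catalog itself, only the final rendered text, which is what distinguishes prompts from tools in the spec's design.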
@VikashLoomba we are talking about MCP-server-supplied prompts; we use experimental_createMCPClient from the Vercel AI SDK. I am not suggesting the prompts be given to the LLM. But the MCP client from the Vercel AI SDK only returns tools right now; it won't grab prompts.
Agree on:

> Prompts are generally exposed as slash commands or parameterized forms meant for the user to initiate and fill in, rather than something for the underlying LLM/AI to interact with

Maybe I wasn't being clear here, but we are in agreement.
Got it; I probably should have been clearer about what I was alluding to: we could use the underlying transport that's passed to experimental_createMCPClient to send a prompts-list request to the server ourselves, instead of waiting for the AI SDK to expose prompts in its client implementation.
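For anyone curious, these are roughly the JSON-RPC messages a client would send over the raw MCP transport to do that. The method names follow the MCP spec; the prompt name and argument values here are made up for illustration, and the transport wiring itself is omitted:

```typescript
// JSON-RPC request to enumerate a server's prompts (MCP "prompts/list").
const listPromptsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "prompts/list",
  params: {},
};

// JSON-RPC request to fetch one prompt with user-supplied arguments
// (MCP "prompts/get"); name and arguments are hypothetical.
const getPromptRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "prompts/get",
  params: { name: "review-pr", arguments: { pr_url: "https://example.com/pr/1" } },
};
```

Since the transport already speaks JSON-RPC for tool calls, sending these two extra methods sidesteps the SDK limitation entirely.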
yeah true
is there any chance we could expose the MCP client so that plugin developers could make calls to it?
Yeah we can expose that
Thank you man, if we're able to interact with the client in the plugin, it will provide an outlet for the MCP servers out there that are utilizing more of the spec than just tools.
> I think this can be best addressed once vercel ai sdk adds support for this: vercel/ai#6294
It appears that the Vercel AI SDK functionality has been implemented. Do you plan to make MCP prompts and resources usable by opencode?
I have a PR ready to implement prompts in opencode. My implementation:
- fetches the prompts when the MCP server connects
- adds each prompt as a command like /mcp.servername.promptname
- fetches the prompt when the user actually sends it and does the replacement just like any other command

I've tested it locally, and it works wonderfully (and I plan to open a PR for resources later on).
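The naming scheme described above can be sketched as a pair of helpers (illustrative only, not the actual opencode code): each MCP prompt becomes a command `/mcp.<server>.<prompt>`, and the command name is parsed back to the server/prompt pair when the user sends it:

```typescript
// Turn a server/prompt pair into the command name surfaced in the UI.
function toCommandName(server: string, prompt: string): string {
  return `/mcp.${server}.${prompt}`;
}

// Parse a command name back into its server and prompt parts, or null if the
// command doesn't follow the /mcp.<server>.<prompt> scheme.
function parseCommandName(command: string): { server: string; prompt: string } | null {
  const match = command.match(/^\/mcp\.([^.]+)\.(.+)$/);
  return match ? { server: match[1], prompt: match[2] } : null;
}
```

Namespacing by server avoids collisions when two MCP servers expose prompts with the same name.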
Can I just go ahead and open the PR?
@paoloricciuti do you have the repo somewhere? I was hoping to test something with prompts and resources.
@kziemski here's the branch on my fork...ready to open a PR whenever
https://github.com/paoloricciuti/opencode/tree/mcp-prompts
And this is what it looks like in use:
https://github.com/user-attachments/assets/e2cec744-1383-4de6-a86b-889b3186f32d