feat: Add support for MCP Resources
Summary
This PR follows up on the closed PR https://github.com/google-gemini/gemini-cli/pull/1789 and solves #1459 by adding support for the MCP Resources feature.
Features added:
- MCP Resources discovery by reading `resources/list` from the MCP servers (see the discovery sketch after this list)
- Listen to server notifications and refresh all resources automatically
- Add a list of available resources to the `/mcp` status UI for each connected server
- Add the possibility to tag and search through the resources with the `@` command
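As a rough illustration of the discovery step, here is a minimal sketch of paginated `resources/list` discovery with the `@modelcontextprotocol/sdk` client; the `discoverResources` helper is hypothetical and not the PR's actual code:

```ts
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import type { Resource } from '@modelcontextprotocol/sdk/types.js';

// Hypothetical helper: fetch every page of resources/list from one server.
async function discoverResources(client: Client): Promise<Resource[]> {
  const resources: Resource[] = [];
  let cursor: string | undefined;
  do {
    // resources/list is paginated; keep following nextCursor until it is absent.
    const page = await client.listResources(cursor ? { cursor } : undefined);
    resources.push(...page.resources);
    cursor = page.nextCursor;
  } while (cursor);
  return resources;
}
```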
Details
I followed up on the work that @heartyguy started, but based most of the syntax and patterns on the MCP prompts functionality.
I also tried to solve most of the problems of the initial PR and to follow the suggestions of @jakemac53:
- handle resource pagination correctly (see the discovery sketch above)
- handle resource list updates via the `listChanged` notification (a notification-handler sketch follows this list)
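For the `listChanged` side, the SDK lets a client register a notification handler and re-run discovery when the server reports a change. This is a sketch that reuses the hypothetical `discoverResources` helper and `client` from the previous snippet, not the exact registry refresh code in this PR:

```ts
import { ResourceListChangedNotificationSchema } from '@modelcontextprotocol/sdk/types.js';

// Re-run discovery whenever the server sends
// notifications/resources/list_changed.
client.setNotificationHandler(ResourceListChangedNotificationSchema, async () => {
  const refreshed = await discoverResources(client);
  // In gemini-cli this would update the resource registry backing /mcp and @.
  console.log(`Refreshed ${refreshed.length} resources`);
});
```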
Out of scope:
- Core tools such as `resources/list` and `resources/read` for the model to automatically fetch MCP resources. This will follow up here: https://github.com/MrLesk/gemini-cli/pull/1 (will create a PR against the main repo as soon as this PR is merged)
- Resource templates: this feature requires a change in the system prompt and a more complex workflow. The model would have to fill in the URI before calling resource read.
- Binary resources: currently we return `[Binary content not inlined: ${sizeBytes} bytes]` as a placeholder (see the read sketch after this list)
- Single resource updates or subscriptions: this can be part of a future enhancement. For the moment I used an MVP approach.
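To make the placeholder behavior concrete, here is a hedged sketch of summarizing a `resources/read` result with the MCP SDK; the URI and the size calculation are illustrative assumptions, and the PR's actual wording may differ:

```ts
// Inside async code, reusing the `client` from the sketches above.
// A resources/read result contains text entries and/or base64-encoded blob entries.
const result = await client.readResource({ uri: 'file:///project/logo.png' }); // hypothetical URI

const rendered = result.contents.map((content) => {
  if ('text' in content && typeof content.text === 'string') {
    return content.text; // text resources are inlined as-is
  }
  if ('blob' in content && typeof content.blob === 'string') {
    // Decode the base64 payload only to measure it; the raw bytes are not inlined.
    const sizeBytes = Buffer.from(content.blob, 'base64').length;
    return `[Binary content not inlined: ${sizeBytes} bytes]`;
  }
  return '[Empty resource content]';
});
```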
Related Issues
Closes #1459
How to Validate
I tested successfully with Backlog.md MCP Server resources and additionally with the @modelcontextprotocol/server-everything.
Pre-Merge Checklist
- [x] Updated relevant documentation and README (if needed)
  - Done: updated `docs/tools/mcp-server.md`
- [x] Added/updated tests (if needed)
- [x] Noted breaking changes (if any)
  - No breaking changes expected
- [ ] Validated on required platforms/methods:
  - [x] MacOS
    - [x] npm run
    - [ ] npx
    - [ ] Docker
    - [ ] Podman
    - [ ] Seatbelt
  - [ ] Windows
    - [ ] npm run
    - [ ] npx
    - [ ] Docker
  - [ ] Linux
    - [ ] npm run
    - [ ] npx
    - [ ] Docker
Summary of Changes
Hello @MrLesk, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly extends the capabilities of the Gemini CLI by integrating support for MCP Resources. It enables the CLI to seamlessly discover, manage, and utilize contextual data provided by MCP servers, enhancing the model's ability to access and incorporate external information. This feature builds upon existing MCP functionalities for tools and prompts, providing a more comprehensive interaction model with connected services.
Highlights
- MCP Resources Support: The Gemini CLI now supports Model Context Protocol (MCP) Resources, allowing it to discover, list, read, and interact with resources exposed by MCP servers.
- New Tools: Two new built-in tools, `list_resources` and `read_resource`, have been added. `list_resources` allows the model to retrieve a list of available resources, while `read_resource` enables it to fetch the content of a specific resource URI.
- Automatic Discovery and Refresh: The CLI automatically discovers MCP resources by reading `resources/list` from connected servers. It also listens for `listChanged` notifications from servers to refresh the resource registry automatically, ensuring up-to-date information.
- Enhanced UI and User Interaction: The `/mcp status` UI now displays a list of available resources for each connected server. Users can also reference and search through resources using the familiar `@` command syntax, similar to how local files are handled.
- Robust Resource Handling: The implementation correctly handles resource pagination during discovery and provides mechanisms to summarize binary resource content without inlining raw data into the conversation.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with :thumbsup: and :thumbsdown: on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
This is awesome!
Hi @jackwotherspoon, I updated this PR to the latest main. Let me know if I should split it into smaller chunks or leave it as is.
Going to take a detailed pass of the PR today @MrLesk 👍
Do you see any natural split points for smaller PRs off the top of your head?
> Going to take a detailed pass of the PR today @MrLesk 👍
> Do you see any natural split points for smaller PRs off the top of your head?
@jackwotherspoon I think the only standalone chunk that we could move out is:
> Add the possibility to tag and search through the resources with the `@` command

It's not a big chunk, but functionality-wise it doesn't have to be part of this PR.
One point of clarification: to confirm my understanding, we are both loading all resources and also providing a tool for the model to do that ad hoc? I'm sure this is useful, just trying to understand why.
Would you be open to splitting the addition of resource loading/explicit user resource invocation into a first PR and then a separate PR adding support for the core tools?
> One point of clarification: to confirm my understanding, we are both loading all resources and also providing a tool for the model to do that ad hoc? I'm sure this is useful, just trying to understand why.
Hi @chrstnb
The upfront loading of resources is used for the `/mcp` status view, and this is useful for the gemini-cli user:
I followed the same approach that was used with the prompts list.
The model also needs the list of resources as a tool, in case it thinks it can find important information there:
> Would you be open to splitting the addition of resource loading/explicit user resource invocation into a first PR and then a separate PR adding support for the core tools?

If I understand correctly, you would like to split the PR into two parts:
- Load resources, show them in the MCP status view, and let users `@` those resources to add them to the context, like files
- Allow the model to request the resource list or read a single resource via core tools

Is that correct?
> Would you be open to splitting the addition of resource loading/explicit user resource invocation into a first PR and then a separate PR adding support for the core tools?
>
> If I understand correctly, you would like to split the PR into two parts:
>
> - Load resources, show them in the MCP status view, and let users `@` those resources to add them to the context, like files
> - Allow the model to request the resource list or read a single resource via core tools
>
> Is that correct?
Yes! If it's not too much work to tease the two apart, I think that will make things easier.
> Yes! If it's not too much work to tease the two apart, I think that will make things easier.
Sure. Will have it done by tomorrow
Hi @chrstnb, I moved the core tools part out into a separate branch and cleaned up the documentation as you requested in the comment. I also removed every reference to the core tools from this PR description.
I will wait for your feedback on the code for this PR.
Hey @chrstnb @jackwotherspoon, I've finished updating the PR according to your comments.
Please note that while rebasing onto the latest main I noticed that https://github.com/google-gemini/gemini-cli/pull/14375 got merged, and I took the opportunity to align the tools and resources refresh logic and be consistent with it in https://github.com/google-gemini/gemini-cli/pull/13178/commits/3e602eb1654429ae700af71adf11318f24fb5172
Follow-up PRs after this one is merged:
- Add the read resource / list resources core tools (the logic that I initially moved out of this PR)
- Handle resources larger than 20 MB
- [Optional] Handle prompt list updates. Since we now have refresh for tools and resources, prompts would be an easy addition.