[WIP] Create model context protocol for MP's API
The Model Context Protocol (MCP) is a largely architecture-agnostic standard that lets LLMs access the key functionality of an API or other walled-off data (or abstract functionality).
Note that a previous effort within LBL produced an MCP for MP. This PR expands on that work to encompass all of the API client's functionality.
This still uses the FastMCP package to generate the MCP server. FastMCP offers an easy, ~10-line way to generate an MCP programmatically from an OpenAPI spec, but its developer recommends that approach only for bootstrapping: LLMs struggle to identify what is important in the API when there is no structure beyond the OpenAPI spec.
A more fleshed-out solution using the (mostly) auto-generated `mp_api.mcp.tools` module is included as well. These tools structure the core functionality of the API client around both `MPRester`'s convenience functions and its search features.
Still needs work and testing.
Sample installation for `gemini-cli`:

```shell
git clone https://github.com/esoteric-ephemera/mp_api.git
cd mp_api
git checkout mcp
pip install fastmcp==2.13.0rc1
fastmcp install gemini-cli $(pwd)/mp_api/mcp/server.py --project $(pwd) --with-requirements requirements/requirements-ubuntu-latest_py3.12_extras.txt --python 3.12
```
Note that passing `--env MP_API_KEY=<your API key>` should work, but I haven't had success with gemini for this.
```json
{
  "mcpServers": {
    "Materials_Project_MCP": {
      "command": "uv",
      "args": [
        "run",
        "--project",
        "/Users/aaronkaplan/Desktop/mp_api",
        "--with",
        "fastmcp",
        "--with-requirements",
        "/Users/aaronkaplan/Desktop/mp_api/requirements/requirements-ubuntu-latest_py3.12_extras.txt",
        "fastmcp",
        "run",
        "/Users/aaronkaplan/Desktop/mp_api/mp_api/mcp/server.py"
      ],
      "env": {
        "MP_API_KEY": "your api key here"
      }
    }
  }
}
```
@tschaume Re:

> Should regenerating the tools be integrated with Github actions and run during a release?

It's doable (FastMCP needs explicit type annotations, and any new types have to be imported). Could be done if that's of interest.
Before this gets merged, I'd like to test a bit more or get feedback from those who are more in the LLM space.
@esoteric-ephemera how about Thrust 3 of SciDAC (Knowledge) gets involved with the MCP testing? Let me know if you want me to mobilize them; happy to do so as a first concrete action item for that team.
Thanks @computron! That might be helpful.
Right now, I have it integrated with Claude desktop, but am running into limits testing it with "enterprise" LLMs (e.g., ChatGPT needs a web-accessible MCP server), especially with the limited tokens you get at the freemium tier.
Would be good to know if anyone on SciDAC has worked with Llama 3 or another good open-source LLM, or has even integrated an MCP with LBL's CBORG.