Support integrating LLMs in Inspector
Updates the sampling specification to explicitly define all request and response fields, so that implementers (and, more importantly, MCP application developers) can understand sampling holistically from the spec itself rather than relying wholly on the schema.
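For reviewers who want concrete context, the request/response shapes being documented look roughly like this. This is an illustrative sketch, not the normative schema: field names follow my reading of the current MCP `sampling/createMessage` schema, and non-text content variants are omitted.

```typescript
// Illustrative shapes for a sampling exchange (sketch only, not the normative schema).
interface SamplingMessage {
  role: "user" | "assistant";
  content: { type: "text"; text: string }; // image/audio variants omitted for brevity
}

// Sent by the server to the client.
interface CreateMessageRequest {
  method: "sampling/createMessage";
  params: {
    messages: SamplingMessage[];
    modelPreferences?: {
      hints?: { name?: string }[];
      costPriority?: number;         // 0..1
      speedPriority?: number;        // 0..1
      intelligencePriority?: number; // 0..1
    };
    systemPrompt?: string;
    maxTokens: number;
  };
}

// Returned by the client after (optionally) getting user approval and sampling the model.
interface CreateMessageResult {
  role: "assistant";
  content: { type: "text"; text: string };
  model: string;       // the model the client actually used
  stopReason?: string; // e.g. "endTurn", "maxTokens"
}

const request: CreateMessageRequest = {
  method: "sampling/createMessage",
  params: {
    messages: [{ role: "user", content: { type: "text", text: "Hello" } }],
    maxTokens: 128,
  },
};
```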
Motivation and Context
See #503. This change improves the written specification by more accurately describing the current expected request and response behaviors of the server and client, respectively.
How Has This Been Tested?
N/A; validated the rendered spec website itself in Mintlify. This PR proposes no schema changes and no behavior changes relative to how clients and servers already behave.
Breaking Changes
None.
Types of changes
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [x] Documentation update
Checklist
- [x] I have read the MCP Documentation
- [x] My code follows the repository's style guidelines
- [x] New and existing tests pass locally
- [x] I have added appropriate error handling
- [x] I have added or updated documentation as needed
Additional context
This PR is in flight alongside #198 and #522, and may need to be adjusted accordingly depending on the order in which these three PRs are merged.
We're trying to build that too with the MCPJam inspector. Having LLM support in the inspector would be great, and even though this project is owned by Anthropic, I think it should support other LLMs, not just Claude.
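One way to keep the inspector vendor-agnostic is a small provider abstraction. This is purely a sketch: the `LLMProvider` interface and the `echo` provider below are hypothetical names I'm making up for illustration, not APIs from this repo or MCPJam.

```typescript
// Hypothetical provider interface so the inspector isn't tied to one LLM vendor.
interface LLMProvider {
  name: string;
  complete(prompt: string, opts?: { maxTokens?: number }): Promise<string>;
}

// Each vendor (Anthropic, OpenAI, a local model, ...) would implement the same
// interface; this stand-in provider just echoes the prompt instead of calling an API.
const echoProvider: LLMProvider = {
  name: "echo",
  async complete(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  },
};

// The inspector UI would then depend only on LLMProvider, never on a concrete vendor SDK.
async function runSample(provider: LLMProvider, prompt: string): Promise<string> {
  return provider.complete(prompt, { maxTokens: 256 });
}
```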
Related: #382, #363
Closing as not planned and labeling with v2 in case we want to revisit this for the next version.