MCP: Improve/revise tool call before accepting

Open elsewhat opened this issue 8 months ago • 5 comments

I'm investigating how to use generative AI within our Software Delivery Lifecycle. My goal is to evolve away from the copy-and-paste interaction we have with LLMs today.

MCP and GitHub Copilot are an excellent fit, and I have set up a combination of standard and custom-made MCP servers for our use case using VS Code Insiders.

Based on my experience so far, there is one key enhancement that would elevate the potential of MCP even further.

When my prompt triggers an MCP server tool call that represents a write action outside the workspace (for example creating a GitHub issue, posting a message to the team Slack channel, or creating data in a knowledge database), I want the possibility of improving/revising it.
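To make the scenario concrete, such a write action arrives as an MCP `tools/call` request whose arguments the confirmation dialog then shows. A minimal sketch of what that payload looks like (the tool name `create_github_issue` and its arguments are illustrative, not from a real server):

```python
import json

# Hypothetical MCP "tools/call" JSON-RPC request for a write action outside
# the workspace. The tool name and argument schema are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_github_issue",
        "arguments": {
            "repo": "my-org/my-repo",
            "title": "Bug: login fails on Safari",
            "body": "Steps to reproduce:\n1. Open the login page\n2. Submit credentials",
        },
    },
}

print(json.dumps(request, indent=2))
```

It is exactly the `params.arguments` object here that I would like to revise before the call is executed.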

Ideally, I would like an Improve action added to the tool invocation confirmation dialog:

[screenshot of the tool invocation confirmation dialog]

The main purpose of the tool invocation confirmation dialog is to

> Present confirmation prompts to the user for operations, to ensure a human is in the loop

ref https://spec.modelcontextprotocol.io/specification/2025-03-26/server/tools/#user-interaction-model

However, what is the appropriate end-user action if the tool call itself is correct, but its arguments should be revised and improved? For the end-user, this is currently not apparent.

For example, if I type in the chat interface while a tool invocation confirmation dialog is showing, does the LLM have the tool call details in its context window?

If I cancel the tool invocation confirmation dialog with the intention of improving the call, is the cancelled tool call kept in the context window, or is it lost?

Adding an Improve action to the tool invocation confirmation dialog would make this clearer for the end-user.

From a technical point of view, it's not straightforward to figure out how to handle such an Improve action. The most important requirement is to keep the tool call details in the context window and let the user iterate on them using normal chat. In my example, I would like to use chat to run sequential thinking on top of the suggested tool call, to check whether it matches my facts and the template defined by an MCP server.
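The iteration loop I have in mind could be sketched roughly as follows. This is purely a hypothetical design sketch, not VS Code's actual implementation; every name here (`improve_tool_call`, `get_user_action`, `revise_fn`) is made up for illustration. The key property is that the proposed call and the user's feedback both stay in the context so a revised call can be produced:

```python
# Hypothetical "Improve" flow for a proposed MCP tool call: the call stays
# in the conversation context and the user can iterate on its arguments
# via chat before accepting. All names are illustrative, not a real API.

def improve_tool_call(proposed_call, get_user_action, revise_fn):
    """Loop until the user accepts, cancels, or asks for a revision.

    get_user_action(call) -> ("accept" | "cancel" | "improve", feedback)
    revise_fn(call, feedback) stands in for an LLM round-trip that rewrites
    the arguments based on the user's chat message.
    """
    call = dict(proposed_call)
    while True:
        action, feedback = get_user_action(call)
        if action == "accept":
            return call
        if action == "cancel":
            return None
        # "improve": the original call plus the feedback go back into the
        # context window, and a revised call comes out.
        call = revise_fn(call, feedback)

# Example run: one "improve" round-trip followed by "accept".
responses = iter([("improve", "make the title more specific"), ("accept", None)])

def fake_user(call):
    return next(responses)

def fake_revise(call, feedback):
    revised = dict(call)
    revised["arguments"] = {**call["arguments"],
                            "title": "Login fails on Safari 17 with SSO"}
    return revised

result = improve_tool_call(
    {"name": "create_github_issue", "arguments": {"title": "Login bug"}},
    fake_user,
    fake_revise,
)
print(result["arguments"]["title"])
```

Cancelling would simply drop the call, whereas "improve" keeps it alive as the basis for the next revision.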

elsewhat avatar Apr 02 '25 07:04 elsewhat

The tool input is editable, you should be able to click into it and change things. Does that solve your use case?

connor4312 avatar Apr 02 '25 16:04 connor4312

Thanks, it does help!

But for many use cases involving data updates, you will not work with just simple arguments to the tool, but with text bodies that are not suitable for inline editing within the JSON structure.
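To illustrate the pain point: once a multiline text body is serialized into a JSON argument, its newlines become `\n` escapes inside one long string value, which is what the inline editor presents (the tool arguments below are made up for illustration):

```python
import json

# A multiline issue body, as a human would write it.
body = """## Summary
Login fails on Safari.

## Steps
1. Open the login page
2. Submit valid credentials"""

# Serialized as a JSON tool argument, the newlines collapse into "\n"
# escapes within a single-line string -- awkward to edit inline.
arguments = {"repo": "my-org/my-repo", "body": body}
print(json.dumps(arguments))
```

Editing that escaped single-line string by hand is error-prone compared to editing the original multiline text.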

Could you also clarify how the chat context window is affected by:

  1. Cancelled tool calls
  2. Additional chat prompting without accepting or cancelling
  3. Accepted tools

(i.e., in which of these situations is the data in the tool call available for the remainder of the chat conversation?)

elsewhat avatar Apr 02 '25 17:04 elsewhat

This feature request is now a candidate for our backlog. The community has 60 days to upvote the issue. If it receives 20 upvotes we will move it to our backlog. If not, we will close it. To learn more about how we handle feature requests, please see our documentation.

Happy Coding!

> But for many use cases involving data updates, you will not work with just simple arguments to the tool, but with text bodies that are not suitable for inline editing within the JSON structure.

At the end of the day, it's only JSON that the LLM understands, and I don't know if it's worth investing in a UI editor for JSON structures. I understand and share the pain of authoring multiline JSON strings, but this seems rather rare.

jrieken avatar Apr 02 '25 19:04 jrieken

This feature request has not yet received the 20 community upvotes it takes to make it to our backlog. 10 days to go. To learn more about how we handle feature requests, please see our documentation.

Happy Coding!

:slightly_frowning_face: In the last 60 days, this feature request has received fewer than 20 community upvotes, so we closed it. Still, a big thank you for taking the time to create this issue! To learn more about how we handle feature requests, please see our documentation.

Happy Coding!