MCP stdio client examples run into errors
I want to test the MCP examples; the only thing I changed was the OpenAI model config, switching it to the deepseek-chat model. The MCP stdio example then produces the errors below. Does this have to do with the model type?
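For reference, the change was roughly the following. This is only a sketch: the attribute names mirror the `config` object that shows up later in this issue (`config.openai_api_key`, `config.openai_base_url`), the environment variable name is my own, and the base URL is DeepSeek's OpenAI-compatible endpoint; the actual example config file may look different.

```python
import os

# Hypothetical sketch of the only change made: point the OpenAI-compatible
# client at DeepSeek and select deepseek-chat. Attribute names follow the
# config object used further down; the real example config may differ.
openai_api_key = os.environ["DEEPSEEK_API_KEY"]   # assumed environment variable name
openai_base_url = "https://api.deepseek.com/v1"   # DeepSeek's OpenAI-compatible endpoint
openai_model = "deepseek-chat"
```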
command:
python atomic-examples/mcp-agent/example-client/example_client/main.py
output:
[04/26/25 17:35:00] INFO Processing request of type ListToolsRequest server.py:534
Initializing MCP Agent System (STDIO mode)...
Available MCP Tools
Tool Name Input Schema Description
AddNumbers AddNumbersInputSchema Adds two numbers (number1 + number2) and returns the sum
SubtractNumbers SubtractNumbersInputSchema Subtracts the second number from the first number (number1 - number2) and returns the difference
MultiplyNumbers MultiplyNumbersInputSchema Multiplies two numbers (number1 * number2) and returns the product
DivideNumbers DivideNumbersInputSchema Divides the first number (dividend) by the second number (divisor) and returns the quotient. Handles division by zero.
• Creating orchestrator agent...
Successfully created orchestrator agent.
MCP Agent Interactive Chat (STDIO mode). Type 'exit' or 'quit' to leave.
You: 1+1=?
Orchestrator reasoning: The user's query is a simple arithmetic addition problem. The available tool 'AddNumbers' can be used to compute the sum of two
numbers. The query provides the numbers 1 and 1, which are the required inputs for the tool. {"number1":1.0,"number2":1.0,"tool_name":"AddNumbers"}
Executing tool: AddNumbers
Parameters: {'number1': 1.0, 'number2': 1.0, 'tool_name': 'AddNumbers'}
[04/26/25 17:35:10] INFO Processing request of type CallToolRequest server.py:534
Result: [TextContent(type='text', text='{"sum": 2.0, "error": null}', annotations=None)]
Error processing query: Tool name does not match
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/instructor/retry.py:174 in retry_sync │
│ │
│ 171 │ │ │ │ │ │ response=response, total_usage=total_usage │
│ 172 │ │ │ │ │ ) │
│ 173 │ │ │ │ │ │
│ ❱ 174 │ │ │ │ │ return process_response( # type: ignore │
│ 175 │ │ │ │ │ │ response=response, │
│ 176 │ │ │ │ │ │ response_model=response_model, │
│ 177 │ │ │ │ │ │ validation_context=context, │
│ │
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/instructor/process_response.py:172 in │
│ process_response │
│ │
│ 169 │ │ ) │
│ 170 │ │ return model │
│ 171 │ │
│ ❱ 172 │ model = response_model.from_response( │
│ 173 │ │ response, │
│ 174 │ │ validation_context=validation_context, │
│ 175 │ │ strict=strict, │
│ │
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/instructor/function_calls.py:258 in │
│ from_response │
│ │
│ 255 │ │ │ Mode.CEREBRAS_TOOLS, │
│ 256 │ │ │ Mode.FIREWORKS_TOOLS, │
│ 257 │ │ }: │
│ ❱ 258 │ │ │ return cls.parse_tools(completion, validation_context, strict) │
│ 259 │ │ │
│ 260 │ │ if mode in { │
│ 261 │ │ │ Mode.JSON, │
│ │
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/instructor/function_calls.py:527 in │
│ parse_tools │
│ │
│ 524 │ │ ), f"Instructor does not support multiple tool calls, use List[Model] instead" │
│ 525 │ │ tool_call = message.tool_calls[0] # type: ignore │
│ 526 │ │ assert ( │
│ ❱ 527 │ │ │ tool_call.function.name == cls.openai_schema["name"] # type: ignore[index] │
│ 528 │ │ ), "Tool name does not match" │
│ 529 │ │ return cls.model_validate_json( │
│ 530 │ │ │ tool_call.function.arguments, # type: ignore │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AssertionError: Tool name does not match
The above exception was the direct cause of the following exception:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/instructor/retry.py:163 in retry_sync │
│ │
│ 160 │ │
│ 161 │ try: │
│ 162 │ │ response = None │
│ ❱ 163 │ │ for attempt in max_retries: │
│ 164 │ │ │ with attempt: │
│ 165 │ │ │ │ logger.debug(f"Retrying, attempt: {attempt.retry_state.attempt_number}") │
│ 166 │ │ │ │ try: │
│ │
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/tenacity/__init__.py:445 in __iter__ │
│ │
│ 442 │ │ │
│ 443 │ │ retry_state = RetryCallState(self, fn=None, args=(), kwargs={}) │
│ 444 │ │ while True: │
│ ❱ 445 │ │ │ do = self.iter(retry_state=retry_state) │
│ 446 │ │ │ if isinstance(do, DoAttempt): │
│ 447 │ │ │ │ yield AttemptManager(retry_state=retry_state) │
│ 448 │ │ │ elif isinstance(do, DoSleep): │
│ │
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/tenacity/__init__.py:378 in iter │
│ │
│ 375 │ │ self._begin_iter(retry_state) │
│ 376 │ │ result = None │
│ 377 │ │ for action in self.iter_state.actions: │
│ ❱ 378 │ │ │ result = action(retry_state) │
│ 379 │ │ return result │
│ 380 │ │
│ 381 │ def _begin_iter(self, retry_state: "RetryCallState") -> None: # noqa │
│ │
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/tenacity/__init__.py:421 in exc_check │
│ │
│ 418 │ │ │ │ retry_exc = self.retry_error_cls(fut) │
│ 419 │ │ │ │ if self.reraise: │
│ 420 │ │ │ │ │ raise retry_exc.reraise() │
│ ❱ 421 │ │ │ │ raise retry_exc from fut.exception() │
│ 422 │ │ │ │
│ 423 │ │ │ self._add_action_func(exc_check) │
│ 424 │ │ │ return │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RetryError: RetryError[<Future at 0x10f37fa10 state=finished raised AssertionError>]
The above exception was the direct cause of the following exception:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/ever/Documents/unique_project/atomic-agents/atomic-examples/mcp-agent/example-client/exam │
│ ple_client/main_stdio.py:211 in main │
│ │
│ 208 │ │ │ │ │ orchestrator_agent.memory.add_message("system", result_message) │
│ 209 │ │ │ │ │ │
│ 210 │ │ │ │ │ # Run the agent again without parameters to continue the flow │
│ ❱ 211 │ │ │ │ │ orchestrator_output = orchestrator_agent.run() │
│ 212 │ │ │ │ │ action_instance = orchestrator_output.action │
│ 213 │ │ │ │ │ reasoning = orchestrator_output.reasoning │
│ 214 │ │ │ │ │ console.print(f"[cyan]Orchestrator reasoning:[/cyan] {reasoning}") │
│ │
│ /Users/ever/Documents/unique_project/atomic-agents/atomic-agents/atomic_agents/agents/base_agent │
│ .py:198 in run │
│ │
│ 195 │ │ │ self.current_user_input = user_input │
│ 196 │ │ │ self.memory.add_message("user", user_input) │
│ 197 │ │ │
│ ❱ 198 │ │ response = self.get_response(response_model=self.output_schema) │
│ 199 │ │ self.memory.add_message("assistant", response) │
│ 200 │ │ │
│ 201 │ │ return response │
│ │
│ /Users/ever/Documents/unique_project/atomic-agents/atomic-agents/atomic_agents/agents/base_agent │
│ .py:174 in get_response │
│ │
│ 171 │ │ │
│ 172 │ │ self.messages += self.memory.get_history() │
│ 173 │ │ │
│ ❱ 174 │ │ response = self.client.chat.completions.create( │
│ 175 │ │ │ messages=self.messages, │
│ 176 │ │ │ model=self.model, │
│ 177 │ │ │ response_model=response_model, │
│ │
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/instructor/client.py:180 in create │
│ │
│ 177 │ ) -> T | Any | Awaitable[T] | Awaitable[Any]: │
│ 178 │ │ kwargs = self.handle_kwargs(kwargs) │
│ 179 │ │ │
│ ❱ 180 │ │ return self.create_fn( │
│ 181 │ │ │ response_model=response_model, │
│ 182 │ │ │ messages=messages, │
│ 183 │ │ │ max_retries=max_retries, │
│ │
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/instructor/patch.py:193 in new_create_sync │
│ │
│ 190 │ │ │
│ 191 │ │ new_kwargs = handle_templating(new_kwargs, mode=mode, context=context) │
│ 192 │ │ │
│ ❱ 193 │ │ response = retry_sync( │
│ 194 │ │ │ func=func, # type: ignore │
│ 195 │ │ │ response_model=response_model, │
│ 196 │ │ │ context=context, │
│ │
│ /opt/anaconda3/envs/rag/lib/python3.12/site-packages/instructor/retry.py:194 in retry_sync │
│ │
│ 191 │ │ │ │ │ raise e │
│ 192 │ except RetryError as e: │
│ 193 │ │ logger.debug(f"Retry error: {e}") │
│ ❱ 194 │ │ raise InstructorRetryException( │
│ 195 │ │ │ e.last_attempt._exception, │
│ 196 │ │ │ last_completion=response, │
│ 197 │ │ │ n_attempts=attempt.retry_state.attempt_number, │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
InstructorRetryException: Tool name does not match
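Reading the traceback, my understanding (an assumption on my part, not verified against the library internals) is that instructor advertises a single tool named after the response model's schema and asserts that the reply calls that exact tool; deepseek-chat seems to answer with a tool call under a different name, which trips the assertion. A self-contained restatement of the failing check, with hypothetical names:

```python
# Restatement of the check from instructor/function_calls.py (parse_tools),
# quoted in the traceback above. If the model returns a tool call named after
# one of the MCP tools instead of the response model's schema, this raises
# "Tool name does not match".
expected_schema_name = "MCPOrchestratorOutputSchema"  # hypothetical response-model name
returned_tool_name = "AddNumbers"                     # what deepseek-chat appears to send back
assert returned_tool_name == expected_schema_name, "Tool name does not match"
```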
After searching for 'deepseek' in the issues, I found https://github.com/BrainBlend-AI/atomic-agents/issues/72 and the mode parameter:
```python
import instructor
import openai

# Patch the OpenAI-compatible client, forcing JSON schema mode instead of
# the default tool-calling mode.
client = instructor.from_openai(
    openai.OpenAI(api_key=config.openai_api_key, base_url=config.openai_base_url),
    mode=instructor.Mode.JSON_SCHEMA,
)
```
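The patched client is then handed to the orchestrator agent, presumably along these lines (a hypothetical sketch; `BaseAgent`/`BaseAgentConfig` field usage here is an assumption, and the example's own factory code builds additional schemas, memory, and prompts):

```python
from atomic_agents.agents.base_agent import BaseAgent, BaseAgentConfig

# Hypothetical sketch of where the patched client plugs in; only the client
# and model fields are shown, the real example configures much more.
orchestrator_agent = BaseAgent(
    BaseAgentConfig(
        client=client,
        model="deepseek-chat",
    )
)
```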
but this time the program just runs into an infinite loop:
Result: [TextContent(type='text', text='{"sum": 2.0, "error": null}', annotations=None)]
{"number1":1.0,"number2":1.0,"tool_name":"AddNumbers"}
Orchestrator reasoning: The user's query is a simple arithmetic question asking for the sum of 1 and 1. The 'AddNumbers' tool is designed to perform
exactly this operation, making it the appropriate choice for handling the query. By extracting the numbers 1 and 1 from the query, we can use the
'AddNumbers' tool to compute the sum.
Executing tool: AddNumbers
Parameters: {'number1': 1.0, 'number2': 1.0, 'tool_name': 'AddNumbers'}
[04/26/25 17:46:45] INFO Processing request of type CallToolRequest server.py:534
Result: [TextContent(type='text', text='{"sum": 2.0, "error": null}', annotations=None)]
{"number1":1.0,"number2":1.0,"tool_name":"AddNumbers"}
Orchestrator reasoning: The user's query is a simple arithmetic question asking for the sum of 1 and 1. The 'AddNumbers' tool is perfectly suited to
handle this request as it is designed to add two numbers together. Therefore, I will use the 'AddNumbers' tool to provide the answer.
Executing tool: AddNumbers
Parameters: {'number1': 1.0, 'number2': 1.0, 'tool_name': 'AddNumbers'}
[04/26/25 17:46:53] INFO Processing request of type CallToolRequest server.py:534
Result: [TextContent(type='text', text='{"sum": 2.0, "error": null}', annotations=None)]
{"number1":1.0,"number2":1.0,"tool_name":"AddNumbers"}
Orchestrator reasoning: The user's query '1+1=?' is a basic arithmetic question that can be directly addressed by adding the two numbers. The
'AddNumbers' tool is specifically designed for this purpose, so I will use it to compute the sum.
Executing tool: AddNumbers
Parameters: {'number1': 1.0, 'number2': 1.0, 'tool_name': 'AddNumbers'}
[04/26/25 17:47:00] INFO Processing request of type CallToolRequest server.py:534
Result: [TextContent(type='text', text='{"sum": 2.0, "error": null}', annotations=None)]
^C
It could definitely be that the model is just not good enough to work properly with MCP.
That being said, I did also push a small potential fix; could you pull in the latest main branch and test it again, without JSON mode first?
If that does not work, definitely give the deepseek-reasoner model a shot instead.
@KennyVaneetvelde Thanks for the reply.
I have tested it: deepseek-chat got "Tool name does not match" without JSON mode, and an infinite loop with JSON mode. deepseek-reasoner got "InstructorRetryException: Error code: 400 - {'error': {'message': 'deepseek-reasoner does not support Function Calling', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}" in both modes.
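(For what it's worth, since that 400 error says deepseek-reasoner rejects Function Calling outright, a tool-calling mode can never work with it; only a purely prompt-based instructor mode could even be attempted. A hypothetical, untested sketch using `instructor.Mode.MD_JSON`, which requests the structured output via prompting only:)

```python
import instructor
import openai

# Hypothetical and untested: MD_JSON asks for the structured output as a
# markdown JSON block via prompting, avoiding the tools API that
# deepseek-reasoner rejects. Whether the model then fills the schema
# reliably is a separate question.
client = instructor.from_openai(
    openai.OpenAI(api_key=config.openai_api_key, base_url=config.openai_base_url),
    mode=instructor.Mode.MD_JSON,
)
```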
So does that mean the deepseek model is not good enough to work properly with MCP? But I have tested it with a pydantic_ai agent, and it gets the right answer with a third-party map MCP server.
@luoyu2015 Hmm, alright, thanks for testing. Nah, if deepseek works with another library, there must be something else going on; we'll have a look!
Closing as I was never really able to replicate this