
[Bug] JSON format errors occur frequently when using Tools

Open xiaowen581 opened this issue 6 months ago • 2 comments

Describe the bug
The parameter is often "undefined" when arguments are passed to the tool, which then triggers a JSON format error popup in the top-right corner (at this point, the "Allow" button has not yet been clicked). The reproduction rate is close to 50%.

Error message:

Expected ':' after property name in JSON at position 5 (line 1 column 6)

inputSchema:

{
  "type": "object",
  "properties": {
    "name": {
      "description": "Name of the person to check for existence. Case-sensitive.",
      "title": "Name",
      "type": "string"
    }
  },
  "required": [
    "name"
  ],
  "title": "check_person_existenceArguments"
}

To Reproduce
Steps to reproduce the behavior: start the tool and chat with OpenAI's gpt-4.1.

Screenshots

  1. Before clicking "Allow": (screenshot)

  2. After clicking "Allow": (screenshot)

(screenshot)

Desktop (please complete the following information):

  • OS: Windows 11
  • Version: 0.14.1

Others
Although they may be unrelated to this issue, the following errors were also encountered during use:

  • Cannot read properties of undefined (reading 'split')
  • Unexpected token 'q', "quired par"... is not valid JSON
  • Unexpected token 'e', "equired pa"... is not valid JSON
  • Unexpected non-whitespace ...
  • Missing required parameter: 'messages[14].tool_calls[0].function.arguments'.

xiaowen581 · Oct 10 '25

The behavior you described likely happens when the model fails to correctly populate the name field in the tool arguments.

In these cases, the model produces { "name": undefined } or {}, which leads the client to render "undefined" in the tool confirmation dialog and throw a JSON parse error before the “Allow” button is clicked.

The LLM cannot always infer a valid value for name from user input (e.g., when the prompt includes multiple or ambiguous names like “Tom and Jerry”). Since undefined is not valid JSON, the UI preview fails to parse it.
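To make the failure mode concrete, here is a minimal sketch (hypothetical helper name, not 5ire's actual code) showing that a raw argument string containing `undefined` is rejected by `JSON.parse`, since `undefined` is a valid JavaScript value but not a valid JSON token:

```typescript
// Hypothetical helper: attempt to parse the model's raw tool-call arguments,
// returning null instead of throwing when the text is not valid JSON.
function tryParseArguments(raw: string): unknown {
  try {
    return JSON.parse(raw);
  } catch {
    return null; // e.g. the model emitted `undefined` inside the object
  }
}

// The argument strings as they might reach the client:
const broken = '{ "name": undefined }'; // `undefined` is not valid JSON
const valid = '{ "name": "Tom" }';

console.log(tryParseArguments(broken)); // → null (parse fails)
console.log(tryParseArguments(valid));
```

A tolerant wrapper like this would at least let the confirmation dialog distinguish "the model sent malformed arguments" from a genuine client bug.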

You can improve reliability by making the schema description more explicit about the expected input, for example:

{
  "type": "object",
  "properties": {
    "name": {
      "type": "string",
      "description": "Enter exactly one person's name (case-sensitive). For multiple names, call this tool separately for each person."
    }
  },
  "required": ["name"],
  "title": "check_person_existenceArguments"
}

Please let me know if this still occurs after updating the schema — happy to continue debugging together.

nanbingxyz · Oct 11 '25

> Please let me know if this still occurs after updating the schema — happy to continue debugging together.

It still occurs (the JSON error occurred 9 times out of 20 attempts). However, switching to a more capable model does indeed solve the problem.

Since the output of LLMs cannot be fully controlled, I think there is room for improvement in the UI. For example, retrying automatically after an error, or providing more user-friendly error messages such as "Please try again," instead of developer-oriented messages like "JSON format error."
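As a rough sketch of what I mean (hypothetical function and message, not 5ire's actual code): re-request the arguments a bounded number of times, and only surface a friendly message once retries are exhausted.

```typescript
// Hypothetical sketch: retry a flaky producer of tool-call argument text,
// and map a final failure to a user-friendly message instead of the raw
// JSON.parse error.
function parseWithRetry(
  produce: () => string, // e.g. re-ask the model for the arguments
  maxAttempts: number
): { ok: true; args: unknown } | { ok: false; message: string } {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return { ok: true, args: JSON.parse(produce()) };
    } catch {
      // malformed JSON (e.g. `undefined` in the arguments) — try again
    }
  }
  return { ok: false, message: "The model returned malformed tool arguments. Please try again." };
}

// Simulated flaky model: fails once, then returns valid JSON.
let calls = 0;
const flaky = () => (calls++ === 0 ? '{ "name": undefined }' : '{ "name": "Tom" }');
console.log(parseWithRetry(flaky, 3)); // succeeds on the second attempt
```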

xiaowen581 · Oct 11 '25