Anthropic tool use argument error appending to nil
Not entirely sure how to repro this yet, but logging it in case others run into it, or until I can repro it with verbose logging again.
I ran into a case where `message.tool_calls` was seemingly nil and we try to append to it here:
```
** (ArgumentError) argument error
    :erlang.++(nil, [%LangChain.Message.ToolCall{status: :complete, type: :function, call_id: "toolu_01TAKvP91x1UE9BKHrVwVH5r", name: "my_function", arguments: nil, index: nil}])
    (langchain 0.3.1) lib/chat_models/chat_anthropic.ex:702: LangChain.ChatModels.ChatAnthropic.do_process_content_response/2
    (elixir 1.18.2) lib/enum.ex:2546: Enum."-reduce/3-lists^foldl/2-0-"/3
    (langchain 0.3.1) lib/chat_models/chat_anthropic.ex:389: LangChain.ChatModels.ChatAnthropic.do_api_request/4
    (langchain 0.3.1) lib/chat_models/chat_anthropic.ex:324: LangChain.ChatModels.ChatAnthropic.call/3
    (langchain 0.3.1) lib/chains/llm_chain.ex:552: LangChain.Chains.LLMChain.do_run/1
    (langchain 0.3.1) lib/chains/llm_chain.ex:398: LangChain.Chains.LLMChain.run/2
    iex:1: (file)
```
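For reference, defaulting the nil to an empty list before appending would avoid the crash. A minimal sketch (variable names are illustrative, not the actual code in `chat_anthropic.ex`):

```elixir
# Minimal sketch of a nil-safe append. In the failing case,
# `message.tool_calls` is nil rather than [], so `++` raises an
# ArgumentError. Defaulting nil to [] before appending avoids it.
updated_calls = (message.tool_calls || []) ++ [tool_call]
message = %{message | tool_calls: updated_calls}
```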
It somehow looks like Anthropic didn't respect the tool input_schema?
Sample of input/output from Anthropic:
Input
```json
{
  "messages": [
    {
      "content": [
        {
          "text": "..",
          "type": "text"
        }
      ],
      "role": "user"
    },
    ..longchain..
    {
      "content": [
        {
          "cache_control": {
            "type": "ephemeral"
          },
          "content": "..output of previous tool",
          "is_error": false,
          "tool_use_id": "toolu_01TznQYPWgXE1Wa6EpFsN3RT",
          "type": "tool_result"
        }
      ],
      "role": "user"
    }
  ],
  "stream": false,
  "system": [
    {
      "cache_control": {
        "type": "ephemeral"
      },
      "text": "..systemprompt..",
      "type": "text"
    }
  ],
  "tools": [
    {
      "description": "description",
      "input_schema": {
        "type": "object",
        "required": [
          "arg1",
          "arg2"
        ],
        "properties": {
          "arg2": {
            "type": "string",
            "description": "arg2"
          },
          "arg3": {
            "type": "string",
            "description": "optional"
          },
          "arg1": {
            "type": "string",
            "description": "arg1"
          }
        }
      },
      "name": "my_function"
    },
    ..othertools..
  ],
  "model": "claude-3-7-sonnet-20250219",
  "max_tokens": 4096,
  "temperature": 1
}
```
Output
```json
{
  "id": "msg_01EiTqy9J4Sow2d77DX6Km1R",
  "type": "message",
  "role": "assistant",
  "model": "claude-3-7-sonnet-20250219",
  "content": [
    {
      "type": "text",
      "text": "..ai.. let's do something:"
    },
    {
      "type": "tool_use",
      "id": "toolu_01TAKvP91x1UE9BKHrVwVH5r",
      "name": "my_function",
      "input": {}
    }
  ],
  "stop_reason": "max_tokens",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 0,
    "cache_creation_input_tokens": 2179,
    "cache_read_input_tokens": 42095,
    "output_tokens": 4096
  }
}
```
Actually, never mind.. I just realized the response has `"stop_reason": "max_tokens"` 🤦
Maybe we should handle this stop_reason with an error or something? 🤔
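One option would be to map Anthropic's `stop_reason` onto the message status, so callers can detect the truncated response instead of trying to process the incomplete `tool_use` block. A hedged sketch with a hypothetical private helper (not necessarily how the library structures it):

```elixir
# Hypothetical helper: translate Anthropic's stop_reason into a
# message status, flagging "max_tokens" as a truncated (:length)
# response so the incomplete tool_use isn't processed as complete.
defp stop_reason_to_status("end_turn"), do: :complete
defp stop_reason_to_status("tool_use"), do: :complete
defp stop_reason_to_status("stop_sequence"), do: :complete
defp stop_reason_to_status("max_tokens"), do: :length
defp stop_reason_to_status(_other), do: :complete
```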
Hi @ci! Ah, interesting. Yes, there is handling for the stop_reason `max_tokens`, but I think this surfaced an issue when it was returned during a `tool_use` message from the assistant. 🤔
The stop_reason should take precedence here. I think we can set up a failing test for this now.
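A failing test could sit alongside the other response-parsing tests. A rough ExUnit sketch, assuming `do_process_response/2` is the parsing entry point and `model_fixture/0` builds a configured `ChatAnthropic` struct (both names are assumptions to adapt to the actual test setup):

```elixir
# Rough ExUnit sketch (assumed function names; adapt to the real
# parsing entry point). The canned response mirrors the report above:
# a tool_use block with empty input and stop_reason "max_tokens".
test "does not raise when stop_reason is max_tokens during tool_use" do
  response = %{
    "role" => "assistant",
    "content" => [
      %{
        "type" => "tool_use",
        "id" => "toolu_01TAKvP91x1UE9BKHrVwVH5r",
        "name" => "my_function",
        "input" => %{}
      }
    ],
    "stop_reason" => "max_tokens"
  }

  # Expect a message flagged as truncated rather than an ArgumentError.
  assert %LangChain.Message{status: :length} =
           ChatAnthropic.do_process_response(model_fixture(), response)
end
```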
Where should I put a test for this stop reason, and how do you want it handled, @brainlid? Maybe I can help.
I encountered this recently and have pushed up a commit on the main branch, which is after 0.4.0-rc.0. It will be officially released in the next RC or final version.