
LiteLLM error: Anthropic doesn't support tool calling without tools= param specified

Open jessevdp opened this issue 6 months ago • 13 comments

I'm running into some issues related to my provider: LiteLLM proxy. I realize this is a somewhat niche case, sorry 😬. See my config below.

Is this something I can resolve client-side?

AI_APICallError: litellm.UnsupportedParamsError: Anthropic doesn't support tool calling without tools= param specified. Pass tools= param OR set litellm.modify_params = True // litellm_settings::modify_params: True to add dummy tool to the request.

Dummy session with the error (error not shown in web UI for some reason?): https://opencode.ai/s/SWAsOXyz


My config

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "my-litellm-provider": {
      "name": "My LiteLLM Provider",
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://my-provider.com",
        "apiKey": "...",
        "includeUsage": true
      },
      "models": {
        "claude-4-5-sonnet": {
          "name": "Anthropic Claude 4.5 Sonnet",
          "release_date": "2025-09-29",
          "attachment": true,
          "reasoning": true,
          "temperature": true,
          "tool_call": true,
          "limit": {
            "context": 200000,
            "output": 64000
          },
          "options": {
            "thinking": {
              "type": "enabled",
              "budget_tokens": 16000
            }
          }
        }
      }
    }
  }
}

jessevdp avatar Oct 02 '25 06:10 jessevdp

The error only seems to happen when the thinking options are added.

jessevdp avatar Oct 02 '25 07:10 jessevdp

@jessevdp is your proxy dropping metadata? It seems like the issue is on your proxy rather than opencode.

If you turn on reasoning, we need to keep the reasoning chunks, otherwise Anthropic will return an error.

rekram1-node avatar Oct 02 '25 21:10 rekram1-node
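For context, "keeping the reasoning chunks" means the proxy must forward assistant turns back intact. A minimal sketch of what such a turn looks like (field names follow the Anthropic Messages API; the values are placeholders, and this is an illustration, not opencode internals):

```python
# With extended thinking enabled, the assistant's `thinking` block (including
# its signature) must be sent back verbatim on later turns; a proxy that
# strips it will cause the follow-up request to fail.
assistant_turn = {
    "role": "assistant",
    "content": [
        {"type": "thinking",
         "thinking": "…model reasoning…",
         "signature": "…opaque signature returned by the API…"},
        {"type": "tool_use", "id": "toolu_01", "name": "read_file",
         "input": {"path": "main.py"}},
    ],
}
```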

Thanks for getting back to me!

I might have been too quick to draw conclusions. I turned off thinking mode because this LiteLLM proxy load-balances between Google Vertex and AWS Bedrock, and it seems that when forwarding to Vertex, thinking mode was producing an error.

(I left a comment on a closed, seemingly sorta related issue https://github.com/sst/opencode/issues/2599#issuecomment-3359320262)

After turning thinking mode off I was back to not getting those tools-related error messages and continued work. But later in the day the error messages returned.

What might have happened is that earlier in the day the issue was still there, but LiteLLM was falling back / retrying on Vertex (Bedrock is our main provider) and since thinking was now disabled those requests would succeed. We might have simply run out of Vertex capacity later that day.

What’s interesting is that whenever I get this error, if I just type “go” the agent seems happy to carry on. But that again might just be related to capacity on Google Vertex, I suppose…


Anyway, I probably need to do some more digging to figure out what’s happening.

Is there some way to enable a nice HTTP log? That way I can inspect the request that resulted in this issue. I’ve tried log level DEBUG but that doesn’t seem to be it.

jessevdp avatar Oct 03 '25 05:10 jessevdp

Hm, I don't think we do any HTTP logs right now? I feel like we used to; it may still be possible, I'd have to check.

rekram1-node avatar Oct 03 '25 13:10 rekram1-node
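In the meantime, one generic way to capture the traffic, independent of opencode's log levels, is to point `baseURL` at a tiny local logging proxy that prints each request and forwards it upstream. A stdlib-only sketch (the `UPSTREAM` URL is a placeholder; streamed SSE responses are buffered, not streamed, so this is for debugging only):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

UPSTREAM = "https://my-provider.com"  # placeholder: your real LiteLLM base URL

def redact(headers: dict) -> dict:
    """Hide credentials before printing request headers."""
    return {k: ("***" if k.lower() == "authorization" else v)
            for k, v in headers.items()}

class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("->", self.path, redact(dict(self.headers)))
        print(json.dumps(json.loads(body), indent=2))  # the exact request sent
        # Drop hop-by-hop headers; urllib sets Host/Content-Length itself.
        fwd = {k: v for k, v in self.headers.items()
               if k.lower() not in ("host", "content-length")}
        resp = urlopen(Request(UPSTREAM + self.path, data=body,
                               headers=fwd, method="POST"))
        data = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Type",
                         resp.headers.get("Content-Type", "application/json"))
        self.end_headers()
        self.wfile.write(data)

def serve(port: int = 8080) -> None:
    # While debugging, set the provider's baseURL to http://127.0.0.1:8080
    HTTPServer(("127.0.0.1", port), LoggingProxy).serve_forever()
```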

I am also encountering this issue. It appears to occur consistently when the session is compacted, either manually or automatically upon reaching the context limit. Otherwise, everything works flawlessly. Given that it's triggered by compaction, it doesn't appear to be specific to the LiteLLM proxy.

One heads-up about my configuration: this is a main agent that uses subagents. I'm wondering if it has anything to do with the subagent sessions?

When running without subagents manual compaction works fine.


mschenk42 avatar Oct 25 '25 15:10 mschenk42

Would it be fixed if we just passed an empty tools array when we compact?

rekram1-node avatar Oct 25 '25 22:10 rekram1-node

Yeah, based on the error message that seems like it would resolve the problem.

mschenk42 avatar Oct 27 '25 16:10 mschenk42
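For reference, the proposed change amounts to something like this (a hypothetical sketch of the compaction request, not opencode's actual code; field names follow the OpenAI-style chat payload and `build_compaction_request` is an invented helper):

```python
def build_compaction_request(history, summarize_prompt, include_empty_tools=True):
    """Hypothetical sketch of the request sent when compacting a session.

    `history` still contains tool-call turns from earlier in the session,
    but no tools are offered for the summarization turn itself.
    """
    request = {
        "model": "claude-4-5-sonnet",
        "messages": history + [{"role": "user", "content": summarize_prompt}],
    }
    if include_empty_tools:
        # An explicit empty list keeps the "tools" key present in the payload,
        # so a strict `"tools" not in optional_params` style check cannot fire.
        request["tools"] = []
    return request
```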

I'll try that.

rekram1-node avatar Oct 27 '25 16:10 rekram1-node

Should be fixed by: https://github.com/sst/opencode/commit/0af450575647fc906f017b0065fe3aca227c369f

Will release soon

rekram1-node avatar Oct 27 '25 19:10 rekram1-node

Unfortunately that didn't work. I'm going to reach out to the team that manages our LiteLLM proxy. Thanks for trying to resolve this. I'll share here if I find a fix.

mschenk42 avatar Oct 28 '25 13:10 mschenk42

Thank you for the heads up.

rekram1-node avatar Oct 28 '25 14:10 rekram1-node

A long time ago, Anthropic models would throw an error if you submitted a completions request with a message history containing tool calls & results but didn't pass in any tools. This is exactly what happens during compaction, where we send the existing conversation history plus a summary prompt but disable all tools, i.e. completion([...existing_history_with_tools, summarize_prompt])

https://github.com/BerriAI/litellm/blob/c45fad3855847715afaebba71926f8e84eb7b355/litellm/llms/anthropic/chat/transformation.py#L672-L686

        if (
            "tools" not in optional_params
            and messages is not None
            and has_tool_call_blocks(messages)
        ):
            if litellm.modify_params:
                optional_params["tools"], _ = self._map_tools(
                    add_dummy_tool(custom_llm_provider="anthropic")
                )
            else:
                raise litellm.UnsupportedParamsError(
                    message="Anthropic doesn't support tool calling without `tools=` param specified. Pass `tools=` param OR set `litellm.modify_params = True` // `litellm_settings::modify_params: True` to add dummy tool to the request.",
                    model="",
                    llm_provider="anthropic",
                )

This is no longer the case (evidenced by the fact that compaction works fine via the AI SDK). In the short term you can fix this on the LiteLLM side by running the proxy with modify_params=True, as the error message suggests. This will add a dummy tool to the request in that case:

https://github.com/BerriAI/litellm/blob/c45fad3855847715afaebba71926f8e84eb7b355/litellm/utils.py#L6809-L6827

def add_dummy_tool(custom_llm_provider: str) -> List[ChatCompletionToolParam]:
    """
    Prevent Anthropic from raising error when tool_use block exists but no tools are provided.

    Relevent Issues: https://github.com/BerriAI/litellm/issues/5388, https://github.com/BerriAI/litellm/issues/5747
    """
    return [
        ChatCompletionToolParam(
            type="function",
            function=ChatCompletionToolParamFunctionChunk(
                name="dummy_tool",
                description="This is a dummy tool call",  # provided to satisfy bedrock constraint.
                parameters={
                    "type": "object",
                    "properties": {},
                },
            ),
        )
    ]
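Concretely, for anyone who can change the proxy config, the flag from the error message goes under `litellm_settings` in the proxy's config.yaml (a minimal sketch; the rest of your config stays as-is):

```yaml
litellm_settings:
  modify_params: true
```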

But obviously this is suboptimal, because now your requests can be massaged in unknown ways.

tl;dr: not an opencode problem; LiteLLM has a lot of footguns.

hewliyang avatar Nov 05 '25 16:11 hewliyang

For me, the native pass-through works much better

  "provider": {
    "my-anthropic": {
      "npm": "@ai-sdk/anthropic",
      "name": "my LiteLLM (Anthropic native)",
      "options": {
        "baseURL": "https://llm.my.corp.com/anthropic/v1",
        "apiKey": "{env:MY_KEY}"
      }
    }
  }

geoHeil avatar Dec 29 '25 09:12 geoHeil

I understand this is not an opencode problem directly, but it's hard to make changes to existing deployments in large companies. It's not the end of the world, since a new session is usually fine to continue existing work, but a workaround in the config would be nice: maybe a way to set a dummy tool on all calls so compaction works?

danielfrg avatar Jan 13 '26 05:01 danielfrg
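For reference, a client-side equivalent of LiteLLM's `add_dummy_tool` would amount to something like this (a sketch only; `ensure_tools` is an invented helper, and opencode currently has no config hook to inject it, which is exactly what's being asked for):

```python
def ensure_tools(request: dict) -> dict:
    """Add a no-op dummy tool when the payload carries no tools at all,
    mirroring litellm's add_dummy_tool so strict proxies accept the request."""
    if not request.get("tools"):
        request["tools"] = [{
            "type": "function",
            "function": {
                "name": "dummy_tool",
                "description": "This is a dummy tool call",
                "parameters": {"type": "object", "properties": {}},
            },
        }]
    return request
```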