[Bug]: Claude-3.7 'thinking' parameter not supported with litellm_proxy
What happened?
I'm using litellm_proxy with a custom base_url. The request looks like this:
import os
import litellm

response = litellm.completion(
    api_key=os.environ.get("LITELLM_API_KEY"),
    base_url=MY_URL,
    model='litellm_proxy/xxx/claude-3-7-sonnet-20250219',
    messages=xxx,
    temperature=0.0,
    thinking={
        "type": "enabled",
        "budget_tokens": 200
    }
)
I got an UnsupportedParamsError:
litellm.exceptions.UnsupportedParamsError: litellm.UnsupportedParamsError: litellm_proxy does not support parameters: {'thinking': {'type': 'enabled', 'budget_tokens': 200}}, for model=xxx/claude-3-7-sonnet-20250219. To drop these, set `litellm.drop_params=True` or for proxy:
`litellm_settings:
drop_params: true`
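For anyone who needs an immediate workaround, the error message itself suggests dropping unsupported params client-side. Here is a minimal sketch, assuming a local proxy URL and a key in LITELLM_API_KEY (note that this drops the thinking block rather than forwarding it, so extended thinking will not actually be used):

    import os
    import litellm

    # Workaround suggested by the error above: silently drop params the
    # litellm_proxy provider mapping doesn't recognise instead of raising.
    litellm.drop_params = True

    response = litellm.completion(
        api_key=os.environ.get("LITELLM_API_KEY"),
        base_url="http://localhost:4000",  # placeholder for the custom proxy URL
        model="litellm_proxy/claude-3-7-sonnet-20250219",
        messages=[{"role": "user", "content": "hello"}],
        temperature=0.0,
        thinking={"type": "enabled", "budget_tokens": 200},  # dropped, not forwarded
    )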
Are you an ML Ops Team?
No
What LiteLLM version are you on?
v1.61.20
@cauchy221 I was able to use Claude 3.7 Sonnet w/ thinking on a proxy server using this code:
import litellm

response = litellm.completion(
    api_key="sk-************",  # the master_key from the proxy config
    base_url="http://localhost:4000",
    model='claude-3-7-sonnet-20250219',
    messages=[{"role": "user", "content": "How can I solve world hunger?"}],
    max_tokens=2001,
    temperature=1,
    thinking={
        "type": "enabled",
        "budget_tokens": 2000
    }
)
print(response)
Important requirements for the Claude thinking parameter:
- budget_tokens must be ≥ 1024
- When thinking is enabled, temperature must be set to 1
Can you try this and see if it solves your issue?
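Applied to the call from the original report, those two constraints amount to the following changes (a sketch; only the affected arguments are shown):

    # Adjusting the reporter's original arguments to satisfy both requirements:
    temperature = 1                                         # was 0.0; must be 1 with thinking enabled
    thinking = {"type": "enabled", "budget_tokens": 1024}   # was 200; minimum is 1024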
@colesmcintosh the user wants to pass model=litellm_proxy/<their-model>
since we specify it here: https://docs.litellm.ai/docs/providers/litellm_proxy
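For context, the two call styles in this thread are the bare model name (as in the working example above) and the litellm_proxy/ provider prefix described in that doc, which is the form that raised UnsupportedParamsError. A rough side-by-side sketch, with placeholder URL and key:

    import litellm

    common = dict(
        api_key="sk-************",           # proxy master_key or virtual key (placeholder)
        base_url="http://localhost:4000",     # the LiteLLM proxy (placeholder)
        messages=[{"role": "user", "content": "hi"}],
        max_tokens=2001,
        temperature=1,
        thinking={"type": "enabled", "budget_tokens": 2000},
    )

    # Works in the example above: model name passed through as-is.
    litellm.completion(model="claude-3-7-sonnet-20250219", **common)

    # Form from the report (and the linked docs): explicit litellm_proxy/ prefix.
    # On v1.61.20 this is the call that raised UnsupportedParamsError for `thinking`.
    litellm.completion(model="litellm_proxy/claude-3-7-sonnet-20250219", **common)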
Ahhh I see, was able to reproduce the error after adding litellm_proxy/ - checking now
Same issue here. Happy to help test if a fix is in progress.
Closing as this is now fixed.
Thanks @krrishdholakia - could you confirm which commit/release this was fixed in?