litellm.exceptions.UnsupportedParamsError with Bedrock on Final LLM Call
Description:
When using HolmesGPT with an AWS Bedrock model for an investigation, litellm can raise an UnsupportedParamsError. This seems to happen specifically during the final step of the investigation loop in ToolCallingLLM, when Holmes asks the LLM for a final text response after potentially several tool calls.
I am using it with bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0.
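For context, here is my understanding of what triggers it. This is a hypothetical minimal reproduction I put together, not code taken from HolmesGPT: the final call replays the message history, which still contains assistant tool calls and tool results, but no longer passes the tools= parameter, and litellm's Bedrock Converse handling rejects that combination.

```python
# Hypothetical minimal reproduction (not taken from the HolmesGPT code base):
# the message history contains tool calls / tool results, but the final call
# omits the `tools=` parameter.
import litellm

messages = [
    {"role": "user", "content": "Why is the HPA in namespace otcollector not scaling?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "kubectl_get_by_kind_in_namespace",
                    # arguments are illustrative only
                    "arguments": '{"kind": "hpa", "namespace": "otcollector"}',
                },
            }
        ],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": "<kubectl output here>"},
]

# Without `tools=`, this raises litellm.exceptions.UnsupportedParamsError for
# Bedrock Converse models, matching the traceback below.
litellm.completion(
    model="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    messages=messages,
)
```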
Traceback:
```
Running tool kubectl_get_by_kind_in_namespace: kubectl get --show-labels -o wide hpa -n otcollector   (tools.py:127)
Traceback (most recent call last):
File "/Users/ozviriuk/Applications/PyCharm Community Edition.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py", line 1570, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ozviriuk/Applications/PyCharm Community Edition.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/ozviriuk/src/holmesgpt-test/vendor/holmesgpt/holmes/main.py", line 928, in <module>
run()
File "/Users/ozviriuk/src/holmesgpt-test/vendor/holmesgpt/holmes/main.py", line 924, in run
app()
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/typer/main.py", line 338, in __call__
raise e
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/typer/main.py", line 321, in __call__
return get_command(self)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/typer/core.py", line 728, in main
return _main(
^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/typer/core.py", line 197, in _main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/click/core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/typer/main.py", line 703, in wrapper
return callback(**use_params)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ozviriuk/src/holmesgpt-test/vendor/holmesgpt/holmes/main.py", line 305, in ask
response = ai.prompt_call(system_prompt, prompt, post_processing_prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ozviriuk/src/holmesgpt-test/vendor/holmesgpt/holmes/core/tool_calling_llm.py", line 120, in prompt_call
return self.call(
^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/sentry_sdk/tracing_utils.py", line 823, in func_with_tracing
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/ozviriuk/src/holmesgpt-test/vendor/holmesgpt/holmes/core/tool_calling_llm.py", line 174, in call
full_response = self.llm.completion(
^^^^^^^^^^^^^^^^^^^^
File "/Users/ozviriuk/src/holmesgpt-test/vendor/holmesgpt/holmes/core/llm.py", line 215, in completion
result = litellm.completion(
^^^^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/litellm/utils.py", line 1247, in wrapper
raise e
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/litellm/utils.py", line 1125, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/litellm/main.py", line 3148, in completion
raise exception_type(
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/litellm/main.py", line 2664, in completion
response = bedrock_converse_chat_completion.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/litellm/llms/bedrock/chat/converse_handler.py", line 377, in completion
_data = litellm.AmazonConverseConfig()._transform_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/litellm/llms/bedrock/chat/converse_transformation.py", line 568, in _transform_request
_data: CommonRequestObject = self._transform_request_helper(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/venv/holmesgpt/lib/python3.12/site-packages/litellm/llms/bedrock/chat/converse_transformation.py", line 445, in _transform_request_helper
raise litellm.UnsupportedParamsError(
litellm.exceptions.UnsupportedParamsError: litellm.UnsupportedParamsError: Bedrock doesn't support tool calling without `tools=` param specified. Pass `tools=` param OR set `litellm.modify_params = True` // `litellm_settings::modify_params: True` to add dummy tool to the request.
```
Should we add more validation to handle this case more gracefully?
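As a workaround on my side I am currently just doing what the error message itself suggests. This is only a sketch of the client-side mitigation, not a proposed fix for HolmesGPT:

```python
import litellm

# `messages` is the same tool-call history from the reproduction sketch above.
# Option 1: let litellm add a dummy tool to the Bedrock request, as the error
# message suggests. Note this is a process-wide flag.
litellm.modify_params = True

response = litellm.completion(
    model="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    messages=messages,
)

# Option 2 (alternative): keep passing the same `tools=` definitions on the
# final call instead of dropping them:
# response = litellm.completion(model=..., messages=messages, tools=tools)
```

Either enabling litellm.modify_params or keeping the tools= list on the final completion call avoids the exception for me, but I am not sure which approach fits better inside ToolCallingLLM.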