[Bug]: Chatting with the assistant is broken
Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://docs.all-hands.dev/modules/usage/troubleshooting
- [X] I have checked the existing issues.
Describe the bug
When chatting with the assistant, I always get the following error:

```
Agent encountered an error while processing the last action. Error: APIError: litellm.APIError: APIError: OpenAIException - 'str' object has no attribute 'model_dump' Please try again.
```
Current OpenHands version
0.9
Installation and Configuration
As in the quick start guide:

```bash
export WORKSPACE_BASE=$(pwd)/workspace

docker run -it --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=ghcr.io/all-hands-ai/runtime:0.9-nikolaik \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app-$(date +%Y%m%d%H%M%S) \
    ghcr.io/all-hands-ai/openhands:0.9
```
Model and Agent
GPT-4 behind a proxy, CodeActAgent
Operating System
No response
Reproduction Steps
No response
Logs, Errors, Screenshots, and Additional Context
No response
This can be worked around by setting the Base URL to https://yourhost/v1. With an OpenAI proxy, the /v1 path is required in the Base URL.
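For reference, a minimal sketch of what the equivalent `config.toml` setup might look like (the host name and key are placeholders; check the OpenHands docs for the exact key names in your version):

```toml
[llm]
# Model served behind the OpenAI-compatible proxy
model = "gpt-4"
# Placeholder credential -- use whatever your proxy expects
api_key = "sk-..."
# Note the trailing /v1: an OpenAI-compatible proxy expects
# requests at <base_url>/chat/completions, etc.
base_url = "https://yourhost/v1"
```

The same values can instead be entered in the Settings panel of the web UI (Model, API Key, Base URL).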
Hi gaord! Just to understand: are you saying that it works if you set the Base URL, or that it still doesn't work?
If you have a proxy setup, the Base URL must be specified.
It works.
@gaord, do you by any chance have any more logs from the container with that error message?
If you are running a proxy, you must set a base URL. See docs: https://docs.all-hands.dev/modules/usage/llms/openai-llms#using-an-openai-proxy
It would be helpful to update the documentation on how to set base_url correctly.
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
This issue was closed because it has been stalled for over 30 days with no activity.