
[Bug]: "The model produces invalid content"

Open avi12 opened this issue 1 year ago • 14 comments

Is there an existing issue for the same bug?

  • [X] I have checked the existing issues.

Describe the bug and reproduction steps

https://www.all-hands.dev/share?share_id=dab4a77e7d64e7a4dc6124dc672d3f4beb2d411a33155977425b821e292d4f4c

The LLM is gpt-4o. In the logs I got:

{'error': {'message': 'The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.', 'type': 'model_error', 'param': None, 'code': None}}

OpenHands Installation

Docker command in README

OpenHands Version

0.17

Operating System

Windows

Logs, Errors, Screenshots, and Additional Context

No response

avi12 avatar Dec 27 '24 22:12 avi12

The trajectory you linked doesn't show anything for me, except "agent loading". Is that normal? Was this the very first thing in that session?

enyst avatar Dec 27 '24 23:12 enyst

That's the thing, I don't know whether the error in the logs has impacted the ability to generate LLM responses for that session.

avi12 avatar Dec 28 '24 07:12 avi12

Can you tell what happened before you got that error?

enyst avatar Dec 28 '24 07:12 enyst

The agent attempts to initialize, the message in the UI is "Waiting for client to become available", and the prompt is already entered so that it's used as soon as the agent has finished initializing.

avi12 avatar Dec 28 '24 08:12 avi12

You ran with the UI, right, with the command in the README? It normally needs the user to say something first; sorry, I can't tell from the trajectory whether you did. What was your prompt?

enyst avatar Dec 28 '24 08:12 enyst

The command is:

docker run \
  -p 3000:3000 \
  --env LOG_ALL_EVENTS=true \
  --env SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:main-nikolaik \
  --env WORKSPACE_MOUNT_PATH=C:\repositories\extensions\youtube-time-manager \
  --name openhands-app \
  --pull always \
  --add-host host.docker.internal:host-gateway \
  -v C:/repositories/extensions/youtube-time-manager:/opt/workspace_base \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ~/.openhands-state:/.openhands-state \
  --rm \
  docker.all-hands.dev/all-hands-ai/openhands:0.17

The prompt is:

This is a Manifest V3 browser extension for Chrome, Edge, Opera, and Firefox called YouTube Time Manager, built with the WXT framework.
I've been working on a new subscription-based feature, called VidMatch, that matches users with similar watch patterns.
To make the feature work, I chose Firebase for the backend (under the /firebase directory). Note that you need to use NPM, and every package.json that you might create has to be of type "module".
Your first task involves combining the projects under /firebase to have a single package.json and having /firebase/firebase.json manage the entire project, while making it work cross-platform, on both Linux and Windows.
Note that currently all of the TypeScript types that are used both under /firebase and /extension are stored in /firebase/functions/src/types.
If you end up moving the types around, ensure that any of the imported types on both the Firebase end and the extension files (TypeScript and Svelte) are imported properly.
After you're done with the coding tasks, run npx eslint --fix */**/*.ts on /firebase and fix any errors that ESLint cannot fix automatically. Do the same for npx prettier -w */**/*.ts.
Never create ESLint configs from scratch. If you do need to make an eslint.config.js, you must import the config from /extension/eslint.config.js.
In addition, once you're done you need to test the ability to deploy Firebase Functions. After installing firebase-tools and cross-env as dev dependencies, you can create an env var GOOGLE_APPLICATION_CREDENTIALS that points at /workspace/firebase/time-manager-726ab.json and deploy with firebase deploy, for example cross-env GOOGLE_APPLICATION_CREDENTIALS="/workspace/firebase/time-manager-726ab.json" && firebase deploy --only functions
When editing files, do not clone them and suffix the new filename with .new; edit the files directly.
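A side note on that last command: cross-env applies the variable only to the command passed to it as arguments, so chaining with && would leave firebase deploy without GOOGLE_APPLICATION_CREDENTIALS set. A minimal sketch of how the deploy step might be written instead, assuming the same paths and locally installed dev dependencies:

# Hypothetical deploy step: cross-env sets the variable for the command that follows it,
# so firebase deploy sees GOOGLE_APPLICATION_CREDENTIALS without any && chaining.
npx cross-env GOOGLE_APPLICATION_CREDENTIALS="/workspace/firebase/time-manager-726ab.json" \
  firebase deploy --only functions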

avi12 avatar Dec 28 '24 09:12 avi12

I'd prefer to use pnpm, but considering its symlink nature, it will be easier for the agent to deal with NPM.

avi12 avatar Dec 28 '24 09:12 avi12

Probably unrelated, but from what I see, you're not running in WSL, but trying to run on Windows directly?

If you run again with this prompt, does it still happen? If yes, could you please capture a little more of the logs, so we can see the full error and perhaps the step it happened on?

enyst avatar Dec 28 '24 09:12 enyst

Probably unrelated, but from what I see, you're not running in WSL, but trying to run on Windows directly?

Yes, it works flawlessly.

If you run again, with this prompt, does it still happen?

Sometimes it happens, sometimes it doesn't. I tried running GPT-4o, GPT-4o Turbo, and Claude 3.5 Sonnet with this prompt. I've never gotten this error with Claude, but I try to rely on OpenAI's models due to Anthropic's rate limiter, which I've been unable to work around.

avi12 avatar Dec 28 '24 10:12 avi12

Please, if you try again, add -e DEBUG=1 to the command to get more logs around the LLM error. I don't think I've ever seen this error, and I don't see anything on our side that could make it happen.
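For example, reusing the exact command quoted earlier in this thread, the only change would be the added flag (a sketch; all other options stay as you had them):

# Same run command as before, plus -e DEBUG=1 for more verbose logging around LLM calls.
docker run \
  -e DEBUG=1 \
  -p 3000:3000 \
  --env LOG_ALL_EVENTS=true \
  --env SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:main-nikolaik \
  --env WORKSPACE_MOUNT_PATH=C:\repositories\extensions\youtube-time-manager \
  --name openhands-app \
  --pull always \
  --add-host host.docker.internal:host-gateway \
  -v C:/repositories/extensions/youtube-time-manager:/opt/workspace_base \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ~/.openhands-state:/.openhands-state \
  --rm \
  docker.all-hands.dev/all-hands-ai/openhands:0.17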

enyst avatar Dec 28 '24 11:12 enyst

I have the same problem, and I get it after a simple prompt. I am using macOS and the OpenAI o1-mini LLM.

srcHan-u avatar Jan 08 '25 17:01 srcHan-u

I see it, and it comes from OpenAI. Looking into why.

enyst avatar Jan 08 '25 22:01 enyst

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] avatar Feb 11 '25 01:02 github-actions[bot]

To note, this is literally the OpenAI API sometimes returning this error for the same prompt that worked fine before. It's weird; sometimes when people encountered it they seem to have found a cause (format issues), and other times nothing. It does seem random from the API.

enyst avatar Feb 16 '25 04:02 enyst