OpenHands
Improve timeout handling and feedback for slow local LLMs
What problem or use case are you trying to solve?
When using OpenHands with LM Studio, long generations from local models can exceed the default request timeout. OpenHands then disconnects before the response is ready and enters a loop of retries with no clear explanation of what went wrong.
Describe the UX or technical implementation you have in mind
- Expose an LLM timeout setting in the LLM settings panel when Advanced mode is enabled
- Improve retry messages so they clearly indicate the cause of failure (e.g., “LLM timeout exceeded”)
Additional context
I worked around the issue by setting the LLM_TIMEOUT environment variable to a higher value via Docker, based on advice from a developer. The workaround works, but most users wouldn’t know to try it without guidance.
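For reference, the workaround described above can be applied roughly like this when launching the container. The timeout value, image name, and port mapping below are illustrative assumptions, not exact values from my setup; only the LLM_TIMEOUT variable itself is the one I actually set:

```shell
# Raise the LLM request timeout (in seconds) so slow local models
# have time to finish generating before OpenHands gives up.
# Image tag and port mapping are placeholders; adjust for your setup.
docker run -it --rm \
  -e LLM_TIMEOUT=600 \
  -p 3000:3000 \
  docker.all-hands.dev/all-hands-ai/openhands
```

Surfacing this same value in the Advanced LLM settings panel would make it discoverable without requiring users to restart the container with a new environment variable.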