Timeout should be configurable
In our evaluation env we only have CPU-based models running, which inevitably leads to long response times.
In many cases baibot fails due to a timeout.
What kind of timeout error are you hitting and with which provider?
I've taken a quick look at the major providers we have (openai, openai-compatible and anthropic) and it seems like:
- we don't specify a timeout anywhere
- all these providers use the reqwest library under the hood with a default configuration (no timeouts)
So… maybe you're hitting some other timeout elsewhere. More information is needed.
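For reference: reqwest applies no total request timeout by default, so if baibot were to make this configurable, it would happen at client construction time. A minimal sketch of what that could look like — note that the `BAIBOT_HTTP_TIMEOUT_SECONDS` variable is a hypothetical name for illustration, not an existing baibot option:

```rust
use std::time::Duration;

fn build_client() -> reqwest::Result<reqwest::Client> {
    // Hypothetical configuration knob -- baibot does not currently expose this.
    let seconds: u64 = std::env::var("BAIBOT_HTTP_TIMEOUT_SECONDS")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(300); // generous default for slow CPU-only backends

    reqwest::Client::builder()
        // Total per-request timeout; without this, reqwest waits indefinitely.
        .timeout(Duration::from_secs(seconds))
        .build()
}
```

With no `.timeout(...)` set (the current situation), the 60-second cutoff you're seeing must come from somewhere else in the chain.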
That's interesting.
When a request in the mentioned env takes longer than 60 seconds, baibot fails with ...
task ... panicked with message "called Result::unwrap() on an Err value: Custom { kind: InvalidData, error: "Failed to read JSON: expected value at line 1 column 1" }"
From what I could see (and reproduce with curl), the request (within the n8n workaround I mentioned in the other issue) completes normally in AnythingLLM and returns the appropriate data ... several seconds after baibot has already failed.
This error seems to indicate an invalid JSON response. Maybe it's your reverse-proxy which kills the request after 60 seconds and responds with some kind of HTML.
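One way to check this would be to dump the response headers after a long request and see whether the upstream answers with `text/html` (a proxy error page) instead of `application/json`. A hypothetical curl invocation — the URL and request body are placeholders, substitute your actual proxy endpoint:

```shell
# -D - dumps response status line and headers to stdout;
# --max-time 120 gives the slow backend enough time to answer.
curl -sS -D - --max-time 120 \
  -H 'Content-Type: application/json' \
  -d '{"message": "ping"}' \
  https://proxy.example.com/api/v1/chat
```

A `504 Gateway Time-out` or an HTML `Content-Type` here would point at the reverse proxy rather than baibot.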
One would think so, but the same error message occurs even if the response is empty.
As I said, ...
- When using curl against the proxy, everything works fine.
- Via baibot I could see that n8n still waits for the response and then correctly returns it to baibot ((full) logs)