0xlws2

7 comments by 0xlws2

the issue is that the model outputs extra text after the valid JSON output; in my case the system prompt was, I think, too complex for a 7B model to...
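One way to work around the extra text is to pull the first valid JSON value out of the raw model output instead of parsing the whole string. This is a hypothetical helper sketched with the standard library's `json.JSONDecoder.raw_decode`, not something from deer-flow itself:

```python
import json


def extract_json(text: str):
    """Return the first valid JSON value found in `text`, ignoring any
    extra prose the model appends before or after it.
    Hypothetical helper for illustration only."""
    decoder = json.JSONDecoder()
    for i, ch in enumerate(text):
        # JSON objects/arrays start with '{' or '['; try decoding from there
        if ch in "{[":
            try:
                value, _end = decoder.raw_decode(text, i)
                return value
            except json.JSONDecodeError:
                continue  # false start, keep scanning
    raise ValueError("no valid JSON found in model output")


extract_json('{"plan": "ok"} Sure! Here is the JSON you asked for.')
```

`raw_decode` stops at the end of the first complete value, so trailing chatter from the model is simply ignored.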

using a large enough model that follows the instructions accurately should help, I think

[#99](https://github.com/bytedance/deer-flow/issues/99#issuecomment-2901743525) I think it's the same issue; in my case the error was because the LLM didn't respond in the requested JSON format, and I had to simplify the system prompt

it also depends a lot on the model and its size, of course, but with the reporter.md you should be able to instruct it on how you want your message formatted. try...

> hi, would you like to show the part of BASIC_MODEL in the config.yml?
>
> as i am also trying to use the local model but failed.

your ollama...

@BarreiroT can we get this added? at least for the Gemini version, the response contains a retry delay value: `"retryDelay": "3s"`

> got status: 429 Too Many Requests. {"error":{"message":"{\n \"error\":...

actually I was looking for the retry mechanism to actually work, not just display the value, as it's there in the response
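A retry loop that honors the server-suggested delay could look roughly like this. This is a minimal sketch under assumptions: `retry_delay_seconds` and `call_with_retry` are hypothetical names, and a `RuntimeError` carrying the 429 body stands in for whatever exception the actual client raises:

```python
import re
import time


def retry_delay_seconds(error_body: str, default: float = 5.0) -> float:
    """Parse the `retryDelay` field (e.g. "3s") out of a Gemini 429
    error body; fall back to `default` if it is absent.
    Hypothetical helper for illustration only."""
    match = re.search(r'"retryDelay"\s*:\s*"(\d+(?:\.\d+)?)s"', error_body)
    return float(match.group(1)) if match else default


def call_with_retry(fn, max_attempts: int = 3):
    """Retry `fn` on 429 errors, sleeping for the server-suggested delay.
    RuntimeError is a stand-in for the real client's rate-limit exception."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError as err:
            # Re-raise anything that isn't a 429, or if we're out of attempts
            if "429" not in str(err) or attempt == max_attempts - 1:
                raise
            time.sleep(retry_delay_seconds(str(err)))
```

The point is just that the delay the API already sends back gets used to schedule the next attempt, instead of being printed and discarded.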