Engel Nyst
@AIhaisi Did you try the solution shared in the comment above? https://github.com/OpenDevin/OpenDevin/issues/1226#issuecomment-2066760909
Thanks, @PierrunoYT, yes, we need it. I'm working on this and will have a PR soon (tomorrow probably). No button, but I'll propose both some automatic logging (errors), and the...
> Looks like LLM logs are stored in `./logs/llm/$DATE/`--if you're able to look through those that'd be super helpful

You also need to have `DEBUG=1` in .env or otherwise enabled...
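A minimal sketch of the `.env` entry mentioned above (only the `DEBUG` line is from the comment; any other settings in your `.env` stay as they are):

```shell
# .env -- enable debug output so full LLM prompts/responses are written
# to ./logs/llm/$DATE/
DEBUG=1
```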
> I'm trying to figure out how to set up logging so we can see the full prompt/response again--that would be super helpful.

That must be because it wasn't working...
Yes, please use the make commands as listed in the README.
According to [this doc](https://github.com/OpenDevin/OpenDevin/blob/main/docs/documentation/LOCAL_LLM_GUIDE.md#3-start-opendevin), the model name needs to be the full model name, as seen in `ollama list`. Can you please try that?
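To illustrate, assuming the `LLM_MODEL` setting from that guide: if `ollama list` prints a model named `llama2:latest` (a hypothetical example -- substitute whatever your own install shows), the setting would use that full `name:tag` form:

```shell
# Hypothetical value -- copy the exact name, including the tag,
# from the output of `ollama list`
LLM_MODEL="ollama/llama2:latest"
```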
Can you please paste the errors now? I'm not sure where the problem is if the settings are taken into account, but I wonder first if they were applied, or...
At least, I _think_ it's (much) less expensive; I can't find it at the moment on the OpenAI website. 😅
> My only hesitation is that gpt-4 is really noticeably better at generating code. It will give a better out-of-the-box experience with OpenDevin.

And at following instructions! It's extremely useful...
Can you try this branch to see if it helps? https://github.com/OpenDevin/OpenDevin/pull/890 Although, some hit this issue and some didn't. Perhaps start the backend and frontend separately, giving the backend a bit of time...
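A rough sketch of what "start them separately" could look like; the `make` target names here are assumptions -- check the project's Makefile/README for the exact ones:

```shell
# Hypothetical target names -- verify against the repo's Makefile.
# Start the backend in the background, wait for it to come up,
# then start the frontend so it can connect.
make start-backend &
sleep 15
make start-frontend
```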