Gabriele Venturi
Hi @beijingtl, we are working on supporting local LLMs, but overall I'm not 100% satisfied with the average output quality; it might fail on slightly more complex tasks.
@pablolerner thanks for reporting. That's probably our key priority at the moment. We'll probably need to split it into several packages, so that anyone can decide which packages to install...
@msdels should have been fixed with the new release, please check it out!
@bennofatius thanks a lot for reporting. In this case it seems like a hallucination to me: the generated code doesn't make much sense. We...
Hi @Marcus-M1999, this might be caused by the fact that the prompt has been optimized for GPT-3.5 and Google models. Every LLM might need some variation to perform at its...
@Marcus-M1999 yes, that's correct!
What I recommend at the moment is reverting to a previous version (for example 1.4.7 or 1.5.1). We are working hard on fixing this issue and we'll keep you updated...
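For anyone unsure how to revert, pinning one of those releases is just a version-constrained install. A minimal sketch (the `pandasai` package name is an assumption here; substitute the actual package you have installed):

```shell
# Revert to a known-good release by pinning the exact version
# (package name assumed; adjust if yours differs)
pip install "pandasai==1.5.1"

# or the earlier one mentioned above:
# pip install "pandasai==1.4.7"
```

Pinning with `==` in a requirements file also keeps the downgrade from being undone by a later `pip install --upgrade`.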
This is most likely an issue related to the prompt and the generated output, but we are also considering different scenarios.
Thanks a lot to each of you for the contribution! This exact bug should have been fixed as of 1.5.7, but I've noticed that sometimes GPT-3.5/4 is using for loops for...
Hi @kyrxanthos, you can pass a custom file path in the config; custom instructions are more about the process you want to put in place before the return! Let me...