Alexander Dibrov
@jmtatsch - To answer your questions: **1.** `mlock` was not working properly for me anyway, and I believe it causes crashes on some machines. It is disabled by default in...
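For reference, this is roughly how the flag surfaces if you're loading the model through llama-cpp-python (a sketch only; the model path is a placeholder):

```python
from llama_cpp import Llama

# use_mlock pins the model's pages in RAM. It defaults to False, and
# enabling it can crash machines without enough lockable memory.
llm = Llama(
    model_path="./models/ggml-model.bin",  # placeholder path
    use_mlock=False,
)
```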
> Can we change the CTX_MAX and max_tokens only for Llama? Also, there are a bunch of changes to the prompts, which I'd like to split into a separate PR, if...
@francip - I've reverted the prompt changes, so hopefully this PR meets your needs. I'll submit prompt changes for separate consideration.
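On the first question: the CTX_MAX and max_tokens overrides can indeed be scoped to llama only. A rough sketch of what I have in mind, gating on the usual LLM_MODEL environment variable (the specific values here are illustrative, not a recommendation):

```python
import os

LLM_MODEL = os.getenv("LLM_MODEL", "gpt-3.5-turbo").lower()

if LLM_MODEL.startswith("llama"):
    # Apply the larger limits only on the llama code path.
    CTX_MAX = 1024
    MAX_TOKENS = 256
else:
    # OpenAI models keep the script's existing defaults.
    CTX_MAX = 4000
    MAX_TOKENS = 100
```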
Prompt changes are now in a [separate PR](https://github.com/yoheinakajima/babyagi/pull/289) as requested. Please let me know if there are any further issues.
Thank you, @DYSIM - this seems to resolve the issue on my end. I've taken the liberty of turning your suggestion into a pull request.
@alexl83 - I've also been encountering this issue, regardless of whether I use the code from @DYSIM or alternative embedding functions in Chroma. I therefore don't think it's unique to my...
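For anyone reproducing this, what I mean by "alternative embedding functions" is swapping Chroma's default for a local one, along these lines (a sketch assuming chromadb with sentence-transformers installed; the model and collection names are only examples):

```python
import chromadb
from chromadb.utils import embedding_functions

# Replace the default embedding function with a local
# sentence-transformers model.
ef = embedding_functions.SentenceTransformerEmbeddingFunction(
    model_name="all-MiniLM-L6-v2"
)

client = chromadb.Client()
collection = client.get_or_create_collection(
    name="tasks", embedding_function=ef
)
```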
Thank you, @alexl83 - I'll have a look at that next. In the meantime, I seem to have gotten the script to run indefinitely by doing some rather nasty string...
By way of an update, I've had to do a considerable amount of editing to make the existing script play nicely with llama. I hope that I did...
I've added [a pull request](https://github.com/yoheinakajima/babyagi/pull/265) with a refined version of the file from my last comment. It allows the script to operate with llama-based models to the extent these models...