wafflecomposite
Same issue on VS 16.6.0
Just stumbled across this. For anyone struggling with this, try adding a trailing slash to the URL (e.g. http://192.168.1.10:8080 -> http://192.168.1.10:8080/) after a login attempt that gets redirected back to the login form. Then...
Some kind of race condition bug? It seems to be fixable by explicitly setting

```python
import langchain
langchain.verbose = False
```

before trying to initialize the llm.
My `langchain.verbose = False` "solution" is no longer relevant since https://github.com/langchain-ai/langchain/pull/11311. @fabilix, check out the suggestions on this issue: https://github.com/langchain-ai/langchain/issues/9854. If that doesn't help, add a comment there too. Apparently...
Until I make some updates, check out this fork: https://github.com/sebaxzero/LangChain_PDFChat_Oobabooga. I haven't tried it myself, but it looks like it should be capable of utilizing the GPU.
It turns out that llama was updated, and the stable-vicuna images were re-quantised. I can't check right now if it's still working, but I have fixed versions in this repo's...
There is either something wrong with the latest llama-cpp-python, or it hasn't been updated with the latest llama.cpp binary yet. I was able to make it work by manually replacing llama.dll inside llama-cpp-python...
Honestly, I have no idea at this point. `llama-cpp-python` keeps being randomly problematic for people. I'll try to do a clean install somewhere this week. Probably gonna update the requirements...
Please report @larasatistevany for spamming: https://support.github.com/contact/report-abuse?category=report-abuse&report=larasatistevany -> I want to report abusive content or behavior. -> I want to report SPAM, a user that is disrupting me or my organization's...
Seems like the main problem is the exceeded context length. First, try editing these lines in app.py: Line 59: try lower values for `chunk_size` and `chunk_overlap`, like 800 and...
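To illustrate what those two parameters do (a minimal sketch, not the actual splitter in app.py, which presumably uses a LangChain text splitter): `chunk_size` caps the length of each chunk, and `chunk_overlap` is how many characters consecutive chunks share.

```python
def split_text(text: str, chunk_size: int = 800, chunk_overlap: int = 80) -> list[str]:
    """Split text into chunks of at most chunk_size characters,
    where consecutive chunks share chunk_overlap characters.
    Hypothetical helper for illustration only."""
    step = chunk_size - chunk_overlap  # how far the window advances each time
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Lowering `chunk_size` shrinks each retrieved chunk, so the pieces stuffed into the prompt are smaller and the model's context limit is harder to hit; the overlap just keeps sentences from being cut cleanly in half at chunk boundaries.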