SH1436
Identical issue to what GLM9 describes.
> have you tried running with the hide source arg?
>
> `python privateGPT.py --hide-source`
>
> that cleaned things up for me

Thanks, Sean, that worked very well :)...
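For anyone curious what the flag actually does, here is a minimal sketch of how a `--hide-source` style option can be wired up with `argparse` and used to suppress the printed source documents. The option and variable names below are illustrative assumptions, not necessarily privateGPT's exact code.

```python
# Sketch only: assumed names, not necessarily privateGPT's exact implementation.
import argparse

def parse_arguments():
    parser = argparse.ArgumentParser(description="Query the local documents.")
    parser.add_argument("--hide-source", "-S", action="store_true",
                        help="Do not print the source documents used for the answer.")
    return parser.parse_args()

args = parse_arguments()

# Later, when printing results from the retrieval chain, the flag would gate
# the source listing, e.g.:
# if not args.hide_source:
#     for doc in res["source_documents"]:
#         print(doc.metadata["source"])
```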
I could try to replicate your issue and explore possible solutions. How did you enable GPU usage?
Thanks for your contribution, Jason - greatly appreciated. In my case line 36 reads:

```python
llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
```

So I added the **n_threads=12** parameter (12 physical and...
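For reference, a sketch of what the modified line could look like, assuming the langchain `GPT4All` wrapper (which accepts an `n_threads` parameter); the model path and context size below are placeholders, and the value 12 comes from the comment above:

```python
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

model_path = "path/to/ggml-model.bin"  # placeholder, set to your local model file
model_n_ctx = 1000                     # placeholder context size
callbacks = [StreamingStdOutCallbackHandler()]

# n_threads set to the number of physical cores (12 in the comment above)
llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj',
              callbacks=callbacks, verbose=False, n_threads=12)
```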
My langchain version was 0.0.177, so I updated to the latest repo and in the process got langchain v0.0.197. Ingest.py utilized 100% CPU, but queries were still capped at 20% (6...
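If anyone wants to confirm which langchain build they ended up with after pulling the latest repo, a quick check from Python:

```python
# Print the installed langchain version
import langchain
print(langchain.__version__)  # e.g. 0.0.197 after the upgrade described above
```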