private-gpt
Number of threads used for privategpt.py
What is the thread usage when running ggml? Can I set the number of threads to use when running privategpt.py?
Yes: in the llm lines, for either LlamaCpp or GPT4All, add `n_threads=(number of threads)`.
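A minimal sketch of picking a thread count and passing it through, assuming a langchain-style constructor that accepts `n_threads` (the helper name and cap are illustrative, not part of privateGPT):

```python
import os

def pick_n_threads(cap: int = 8) -> int:
    """Choose a thread count from the available CPUs, capped so the
    machine stays responsive. privateGPT's actual defaults may differ."""
    cpus = os.cpu_count() or 1  # os.cpu_count() can return None
    return max(1, min(cpus, cap))

# Then, in the llm lines of privategpt.py, pass it to the constructor,
# e.g. (assumed langchain API, model path is hypothetical):
# llm = LlamaCpp(model_path="./models/ggml-model.bin", n_threads=pick_n_threads())
# llm = GPT4All(model="./models/ggml-model.bin", n_threads=pick_n_threads())
```

On a 64-core server with the default cap this would still return 8; raise the cap if you want the model to use more cores.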
Thank you, I managed to get it working.
This now seems to work only for LlamaCpp. I tried 64 threads on a big server and it worked fine, but with GPT4All it is always limited to 4 threads and never goes over 400% CPU usage, even with `n_threads=(number of threads)` set.
Same here: setting `llm.n_threads` to 1, 8, or anything else makes no difference to the thread count.
Update to the latest version of langchain (0.0.193) and `n_threads` works for GPT4All.