TopNotchSushi

Results: 2 comments by TopNotchSushi

Yes. In the `llm` lines, for either `LlamaCpp` or `GPT4All`, add `, n_threads=(number of threads)`.
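A minimal sketch of what that change could look like, assuming a privateGPT-style setup built on langchain's `LlamaCpp` / `GPT4All` wrappers (the `model_path` value and the commented-out `llm` lines are illustrative placeholders, not taken from the original post):

```python
import os

# A common rule of thumb: set n_threads near the number of physical cores.
# os.cpu_count() reports logical cores, so halving it is a rough guess.
n_threads = max(1, (os.cpu_count() or 2) // 2)

# Hypothetical llm lines (assuming langchain-style wrappers as used in
# privateGPT-like projects) -- add n_threads to whichever one you use:
#
#   llm = LlamaCpp(model_path="models/ggml-model.bin", n_threads=n_threads)
#   llm = GPT4All(model="models/ggml-model.bin", n_threads=n_threads)

print(n_threads)
```

Leaving `n_threads` unset typically falls back to a library default, which may underuse a multi-core machine; passing it explicitly is the point of the suggestion above.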

> Checkout this discussion, and I have been able to run most of the models, which were not being run till now.
>
> https://huggingface.co/TheBloke/MPT-7B-Instruct-GGML/discussions/2
>
> llama-cpp-python==0.1.53 ctransformers==0.2.0 Were...