private-gpt
How to utilize the GPU in Windows?
When I was running privateGPT on my Windows machine, my device's GPU was not used. You can see that memory usage is very high, but the GPU sits idle.
My nvidia-smi output is shown above, and it looks like CUDA is working too, so what's the problem? Is this normal for the project?
I don't think this repo makes use of the GPU, only the CPU.
@ONLY-yours GPT4All, which this repo depends on, says no GPU is required to run this LLM. The whole point of it seems to be that it doesn't use a GPU at all.
@katojunichi893
Seems like that. It only uses RAM, and the cost is so high that my 32 GB can only handle one topic. Could this project have a variable in .env, such as useCuda, so we can set that parameter to turn it on?
I'm not a deep learning developer, so I don't know the details here. Is that possible? Something like the sketch below is what I mean.
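Just a rough sketch of the idea; the MODEL_N_GPU_LAYERS variable name is made up, it is not something the repo currently reads:

# Hypothetical: load an optional GPU setting from .env alongside the existing ones.
import os
from dotenv import load_dotenv

load_dotenv()
model_n_gpu_layers = int(os.environ.get("MODEL_N_GPU_LAYERS", 0))  # 0 = CPU only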
This would be a nice feature to add, since prompts on the CPU take quite a bit of time to execute and return an answer.
GPU would be very useful.
GPT4ALL does have a CUDA option. Is there a way to enable that?
case "LlamaCpp":
llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False)
case "GPT4All":
llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
case _default:
print(null)
exit;
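For the LlamaCpp branch at least, newer versions of langchain's LlamaCpp wrapper expose an n_gpu_layers argument, so GPU offloading could be wired in roughly like this. This is only a sketch, not the repo's current code: it assumes llama-cpp-python was built with cuBLAS/GPU support, and n_gpu_layers=40 is just a placeholder value.

# Sketch: offload part of the model to the GPU via langchain's LlamaCpp wrapper.
# Has no effect unless llama-cpp-python is compiled with GPU (cuBLAS) support.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path=model_path,
    n_ctx=model_n_ctx,
    n_gpu_layers=40,  # assumed value; tune to how many layers fit in your VRAM
    callbacks=callbacks,
    verbose=False,
)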
It looks like if the LangChain API supports CUDA, then it will be easy to use.
Figured this out! I explained here in #217