
How to utilize the GPU on Windows?

Open ONLY-yours opened this issue 2 years ago • 8 comments

When I was running privateGPT on my Windows machine, my device's GPU was not used. You can see that memory usage is very high, but the GPU is idle.

My nvidia-smi output is below, and CUDA appears to be working. So what's the problem? Is this normal for this project?

[screenshots: Task Manager memory usage and nvidia-smi output]

ONLY-yours avatar May 14 '23 13:05 ONLY-yours

I don't think this repo makes use of the GPU, only the CPU.

bmaltais avatar May 14 '23 15:05 bmaltais

@ONLY-yours GPT4All, which this repo depends on, says no GPU is required to run this LLM. The whole point of it seems to be that it doesn't use the GPU at all.

hikalucrezia avatar May 14 '23 16:05 hikalucrezia

@ONLY-yours GPT4All, which this repo depends on, says no GPU is required to run this LLM. The whole point of it seems to be that it doesn't use the GPU at all.

@katojunichi893

It seems so, but the RAM cost is so high that my 32 GB can only handle one topic. Could this project add a variable in .env, such as useCuda, so we can change that parameter to enable it?

I'm not a deep learning developer, so I don't know the details here — is this possible?
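The suggested switch could be sketched roughly as follows. This is a hypothetical example, not code from the repo: the variable name USE_CUDA and the helper llm_kwargs are made up for illustration, and it assumes the LlamaCpp path, since llama-cpp-python exposes an n_gpu_layers parameter for GPU offload (the GPT4All backend itself was CPU-only at the time).

```python
# Hypothetical sketch of a "useCuda"-style switch: read a flag from the
# environment (e.g. USE_CUDA=true in .env) and map it onto
# llama-cpp-python's GPU-offload parameter, n_gpu_layers.
import os

def llm_kwargs(model_path: str, n_ctx: int) -> dict:
    """Build keyword arguments for the LlamaCpp constructor,
    adding GPU offload only when the flag is set."""
    kwargs = {"model_path": model_path, "n_ctx": n_ctx, "verbose": False}
    if os.environ.get("USE_CUDA", "false").lower() in ("1", "true", "yes"):
        # -1 asks llama-cpp-python to offload all layers to the GPU;
        # this only helps if the wheel was built with CUDA support.
        kwargs["n_gpu_layers"] = -1
    return kwargs

os.environ["USE_CUDA"] = "true"
print(llm_kwargs("models/ggml-gpt4all-j.bin", 1000))
```

The kwargs dict would then be splatted into the constructor, e.g. `LlamaCpp(**llm_kwargs(model_path, model_n_ctx), callbacks=callbacks)`.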

ONLY-yours avatar May 15 '23 01:05 ONLY-yours

This would be a nice feature to add, since prompts on CPU take quite a bit of time to execute and return an answer.

HoseinHashemi avatar May 15 '23 04:05 HoseinHashemi

GPU would be very useful.

ram-sh avatar May 15 '23 06:05 ram-sh

GPT4All does have a CUDA option. Is there a way to enable it?

CraftCanna avatar May 16 '23 01:05 CraftCanna

GPT4All does have a CUDA option. Is there a way to enable it?

        case "LlamaCpp":
            llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False)
        case "GPT4All":
            llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
        case _:
            print(f"Model type {model_type} is not supported")
            exit(1)

Looks like if the LangChain API supports CUDA, it will be easy to use.
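For the LlamaCpp branch above, GPU use came down to the underlying llama-cpp-python wheel, not LangChain itself: the package had to be rebuilt with its cuBLAS backend enabled. A sketch of the install command commonly used at the time (assumes an NVIDIA card and the CUDA toolkit are present):

```shell
# Reinstall llama-cpp-python compiled with CUDA (cuBLAS) support;
# afterwards, GPU offload is requested via the n_gpu_layers argument
# when constructing LlamaCpp.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python
```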

ONLY-yours avatar May 16 '23 02:05 ONLY-yours

Figured this out! I explained it in #217.

shondle avatar May 22 '23 14:05 shondle