maozdemir
@Kaszanas probably something went wrong during the compilation of llama-cpp-python; can you try uninstalling and reinstalling it?
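Something like this should force a clean CUDA build (a minimal sketch; `LLAMA_CUBLAS` was the relevant build flag for llama.cpp at the time, so adjust if your version differs):

```bash
# Remove the existing (possibly CPU-only) wheel first
pip uninstall -y llama-cpp-python

# Rebuild from source with cuBLAS enabled.
# LLAMA_CUBLAS is an assumption based on llama.cpp's build flags of this era;
# --no-cache-dir avoids reusing a previously built CPU-only wheel.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```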
@johnbrisbin can you use this wizard? https://pytorch.org/get-started/locally/ Also, I'll read your comment when I have time; I'm not ignoring it. :)
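For pip on Linux with CUDA 11.8, the wizard produces something like the command below (the exact index URL depends on the OS, package manager, and CUDA version you select):

```bash
# Example output of the PyTorch "get started" wizard for pip + CUDA 11.8;
# your CUDA version and index URL may differ.
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```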
> @maozdemir Compilation ran successfully, GPU ingest works as intended. This issue is only present when trying to run the privateGPT script. I could try and show you step by...
> First of all, great contribution, was looking out for this and was excited to see someone put it together so quickly. Unfortunately I haven't got it to use my...
> > > @maozdemir Compilation ran successfully, GPU ingest works as intended. This issue is only present when trying to run the privateGPT script. I could try and show you...
@StephenDWright you're welcome, and this will help me write a better README too :) so thanks for your feedback. The possible cause is that your llama-cpp-python was not compiled with CUBLAS....
@StephenDWright alright, that doesn't seem to be the issue. Assuming you already have the CUDA drivers installed, the only thing that comes to mind is torch: `pip3 install -U...`
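Before reinstalling, you can quickly check whether torch actually sees your GPU:

```bash
# Prints True if torch was built with CUDA support and can see a GPU;
# False usually means a CPU-only torch wheel is installed.
python3 -c "import torch; print(torch.cuda.is_available())"
```

If this prints `False`, reinstalling torch from the CUDA index (as in the wizard command above) is the usual fix.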
I wouldn't rely on OCR; any OCR output should be reviewed by a human. The rest sounds good, though.
What commands do you use to launch the project? Can you paste them here?