private-gpt
What are the hardware requirements for running this project?
Can you help by giving more information about the hardware requirements for testing this project?
In particular, what do I need in terms of hardware?
Instructions for ingesting your own dataset
I've tried it on a VPS, Google Colab and a M1 Pro Mac and the best performance so far was the Mac.
I think the Mac has the best support because llama.cpp needs BLAS support, and as far as I can see the VPS/Google Colab instances with a GPU have none.
In terms of RAM: at least 10 GB of free RAM on your system; both 4 GB model files are loaded into RAM, and the calculations for answering take up space too.
This should have been indicated in the README; I wasted effort only to find out that I can't run it on my potato laptop hahaha
No, there should be a maximum memory limit so it can run even on your potato laptop, by streaming the processing in chunks rather than loading everything in bulk.
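The streaming idea above could be sketched like this. This is a hypothetical helper, not part of private-gpt: it reads a dataset file in fixed-size chunks so that ingestion never holds the whole file in memory at once:

```python
import os
import tempfile

def read_in_chunks(path, chunk_size=1 << 20):
    """Yield a file in fixed-size chunks (default 1 MB) so the whole
    dataset never sits in RAM at once."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

# Demo: round-trip a small temp file through 4-byte chunks.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello, streaming ingestion")
    tmp_path = tmp.name
reassembled = b"".join(read_in_chunks(tmp_path, chunk_size=4))
os.remove(tmp_path)
```

The model weights themselves still have to fit in memory to run inference, so chunked ingestion bounds the memory used by your dataset, not by the LLM.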
It used to run fine on WSL2 with a 3060 (8 GB of VRAM). But since all the pull requests and feature additions, now I'm just trying to get it to ingest and infer. Neither is working for me.