localGPT
M1/M2 macOS users who do not have NVIDIA GPUs
Please read the README file. You need to implement the changes mentioned in ingest.py, run_localGPT.py, and instructor.py.
instructor.py is inside the InstructorEmbedding package. The path to "instructor.py" will probably look similar to this: file_path = "/System/Volumes/Data/Users/USERNAME/anaconda3/envs/LocalGPT/lib/python3.10/site-packages/InstructorEmbedding/instructor.py"
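Rather than guessing the path, Python can report where a package is installed. A minimal sketch, with the hypothetical helper name package_path; the stdlib json module is used as a stand-in below only because InstructorEmbedding may not be installed in every environment:

```python
import importlib.util

def package_path(name: str):
    """Return the filesystem path of an installed module, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# For localGPT you would query "InstructorEmbedding.instructor";
# "json" is queried here only because it ships with Python.
print(package_path("json"))
```

On a conda setup like the one above, querying "InstructorEmbedding.instructor" should print the site-packages path to edit.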
Your GitHub project is a great example of how to use modern development tools to create efficient and scalable software.
Your torch MPS backend installation instructions are outdated. I was able to pass the torch.backends.mps.is_available() check with the default pip install -r requirements.txt installation path on my Mac Pro M1 Max. It installed torch==2.0.1, which includes MPS support by default.
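The device selection this thread is circling around can be sketched as a small helper. This is an illustrative sketch, not localGPT's actual code; pick_device is a hypothetical name, and the torch calls are shown only in a comment so the snippet runs without torch installed:

```python
def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Prefer CUDA, then Apple-silicon MPS, then fall back to CPU."""
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"

# With torch installed you would call it as:
#   pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())
print(pick_device(False, True))  # on an M1/M2 Mac this yields "mps"
```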
Can you please change the default device_type back to "cuda"? That way it will be aligned with the README. Thanks,
I am not sure if writing default='Cuda' would work. I will test on M2 first.
Per the owner's request, the default is changed back to "cuda". I ran the code on M2 even with default='cuda', and it ran smoothly. However, the output reported it was running on cuda. If this causes issues, I recommend changing it to default="mps".
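A default like default="cuda" on a command-line flag can be sketched with argparse. This is an assumption for illustration, not localGPT's actual argument parser; build_parser is a hypothetical name:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Build a parser with a --device_type flag defaulting to "cuda"."""
    parser = argparse.ArgumentParser(description="illustrative device flag")
    # Default stays "cuda" per the owner's request;
    # M1/M2 users would pass --device_type mps explicitly.
    parser.add_argument("--device_type", default="cuda",
                        choices=["cuda", "mps", "cpu"])
    return parser

print(build_parser().parse_args([]).device_type)                  # -> cuda
print(build_parser().parse_args(["--device_type", "mps"]).device_type)  # -> mps
```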
@phdykd thank you for updating the PR.
Where do I find the instructor.py file?
Please read the README in the fork I created. This is for M2/M1 macOS users.
https://github.com/phdykd/localGPT/blob/main/README.md
@PromtEngineer, I wrote a detailed README file on how M1/M2 macOS users can find instructor.py. You can find the new changes in my fork here; verify and then merge:
https://github.com/phdykd/localGPT/blob/main/README.md