Christian Weyer
Much needed! :-)
I do get this error running on an Apple M1 with PyTorch compiled against MPS and running the script with `python run_localGPT.py --device_type=mps`
> I do get this error running on an Apple M1 with PyTorch compiled against MPS and running the script with `python run_localGPT.py --device_type=mps` Should I create a separate issue...
Some updates and partial success on my M1: "cuda" is hardcoded in https://github.com/PromtEngineer/localGPT/blob/979f912d07d40704d105c92b4f20a6a5b8df0c6a/run_localGPT.py#L63. This should probably take the `device_type` as an input. @PromtEngineer I changed this locally and it starts....
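Roughly what I have in mind, as a minimal sketch only (not the actual localGPT code; the `load_full_model` name and its parameters are just illustrative):

```python
# Illustrative sketch: pass the CLI device_type through instead of
# hardcoding "cuda" when loading a full (non-quantized) model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_full_model(model_id: str, device_type: str = "cuda"):
    """Load a full Hugging Face model on the requested device ("cuda", "mps", or "cpu")."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,   # fp16 is supported on both CUDA and MPS
        low_cpu_mem_usage=True,
    )
    return model.to(device_type), tokenizer
```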
> @ChristianWeyer this seems to be a bug, thanks for highlighting it. I am not sure if auto_gptq supports M1/M2. Will need to test that. Seems it does not: https://github.com/PanQiWei/AutoGPTQ/issues/133#issuecomment-1575002893...
BTW @PromtEngineer: the current code checks for CUDA explicitly for full models, which makes it unusable for MPS: https://github.com/PromtEngineer/localGPT/blob/main/run_localGPT.py#L68
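Something like the following would keep the full-model path open for MPS; this is just a sketch of the idea with a hypothetical `resolve_device` helper, not the repo's actual code:

```python
# Illustrative sketch: resolve the device from device_type instead of
# requiring CUDA, so MPS (and CPU) users still reach the full-model path.
import torch

def resolve_device(device_type: str) -> str:
    device_type = device_type.lower()
    if device_type == "cuda" and torch.cuda.is_available():
        return "cuda"   # quantized (AutoGPTQ) models also work here
    if device_type == "mps" and torch.backends.mps.is_available():
        return "mps"    # full models only; AutoGPTQ has no Metal backend yet
    return "cpu"
```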
> @ChristianWeyer I finally got an M2 and just tested it; that is the case. Need to figure out if there is another way. Do you already have an idea...
Is there an official and public version of the roadmap, @voidking?
Any ideas, @sonichi (or @tidymonkey81), on how to help @matsuobasho?
Metal support would be highly appreciated!