
feat: Apple M1/M2 support through MPS

ChristianWeyer opened this issue 1 year ago · 1 comment

Feature request

I want to use OpenLLM with the available models on Apple M1/M2 processors, with GPU support through MPS.

Today:

openllm start falcon
No GPU available, therefore this command is disabled

Motivation

No response

Other

No response

ChristianWeyer avatar Jun 20 '23 17:06 ChristianWeyer

I'm currently disabling falcon on MPS, since it would just run out of memory when even trying to load the model on a Mac.

aarnphm avatar Jun 20 '23 22:06 aarnphm

Not sure if this is still valid. I have since tested PyTorch on MPS extensively, and it is often slower. Will probably investigate MLC vs. GGUF for this.

aarnphm avatar Nov 21 '23 09:11 aarnphm
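The "No GPU available" message above suggests a device check that only looks for CUDA. A minimal sketch of a fallback order that also considers MPS is shown below; the `pick_device` helper is hypothetical (not part of OpenLLM), and in real code the two flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`, which are actual PyTorch APIs.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Pick a torch device string, preferring CUDA, then MPS, then CPU.

    Hypothetical helper; in practice the flags would be
    torch.cuda.is_available() and torch.backends.mps.is_available().
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Apple Silicon GPU via Metal Performance Shaders
    return "cpu"

# On an M1/M2 Mac without CUDA, this would select "mps" instead of
# refusing to start:
print(pick_device(cuda_available=False, mps_available=True))
```

Note that, as the later comment observes, selecting MPS is not automatically a win: some PyTorch operations fall back to CPU or run slower on MPS, so a benchmark against the CPU path is worthwhile before enabling it by default.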