
Intel Arc GPU support

itlackey opened this issue 1 year ago · 1 comment

Is your feature request related to a problem? Please describe.

Yes, I would like to run a local model using my Intel Arc GPU. I have successfully run local models using FastChat and would like to do the same in Open Interpreter. Currently I do not see any option or indication that Intel GPUs are supported.

Describe the solution you'd like

It seems like it should work if llama.cpp is compiled with CLBLAST, but I'm not sure whether the code actually uses the XPU. Open Interpreter appears to use LiteLLM to interact with the model, so additional code changes may be required, but I'm not certain.
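
For illustration, a rough sketch of what I mean. This assumes llama-cpp-python was installed with CLBLAST enabled (e.g. `CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python --force-reinstall --no-cache-dir`) and uses a hypothetical local GGUF model path; I have not verified this on Arc:

```python
# Sketch: check whether a CLBLAST-built llama-cpp-python actually offloads to the GPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local model file
    n_gpu_layers=-1,  # try to offload all layers; watch the startup log for CLBlast/device lines
    verbose=True,     # prints backend info so you can confirm the Arc GPU is being used
)
out = llm("Q: Say hello.\nA:", max_tokens=16)
print(out["choices"][0]["text"])
```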

Describe alternatives you've considered

I have attempted to point Open Interpreter at a local OpenAI-compatible API hosted via FastChat, which is running on my GPU. I receive various errors depending on which settings/models I try. From what I can tell, the software does not currently support a locally hosted model via FastChat either, or I have not found the correct combination of settings and model.
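
For context, roughly how I have been trying to wire it up. This assumes FastChat's controller, a model worker, and its OpenAI-compatible server are already running (here on `http://localhost:8000/v1`), and uses Open Interpreter's `import interpreter` Python API; the model name is only an example and attribute names may differ across versions:

```python
# Sketch: point Open Interpreter at FastChat's OpenAI-compatible endpoint via LiteLLM.
import interpreter

interpreter.api_base = "http://localhost:8000/v1"  # FastChat openai_api_server (assumed port)
interpreter.api_key = "dummy"                      # FastChat ignores the key, but one must be set
interpreter.model = "openai/vicuna-7b-v1.5"        # "openai/" prefix keeps LiteLLM on the OpenAI wire format
interpreter.chat("List the files in the current directory.")
```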

Additional context

No response

itlackey avatar Sep 28 '23 07:09 itlackey

@ericrallen This should be solved with LM Studio; I believe it has OpenCL support. This question was asked back when we used Ooba, so it is no longer relevant.
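
For anyone landing here, a minimal sketch of that setup, assuming LM Studio's local server is running on its default `http://localhost:1234/v1` endpoint (attribute names may differ across Open Interpreter versions):

```python
# Sketch: use LM Studio's local OpenAI-compatible server as the backend.
import interpreter

interpreter.api_base = "http://localhost:1234/v1"  # LM Studio's default local server address
interpreter.api_key = "not-needed"                 # LM Studio does not check the key
interpreter.model = "openai/local-model"           # model id is whatever LM Studio has loaded
interpreter.chat()
```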

Notnaton avatar Nov 19 '23 11:11 Notnaton