🚀 [Feature]: Add support for Intel ARC GPUs A750 and A770 (If Possible)
### Feature Description
Much like the existing support for NVIDIA GPUs, I would like support for Intel Arc GPUs so they can also be used with the Docker container.
### Additional Context (optional)
My current implementation of this container is as follows:
Environment:
- CPU: Ryzen 5900X
- Hypervisor: VMware ESXi 8.0 U2
- VM: Ubuntu 22.04 Server, 4 CPU cores assigned, running Docker
- GPU: Intel Arc A750, 8GB VRAM
VMware ESXi allows the Intel Arc GPU to be passed through to the VM, where it works natively just like on a bare-metal machine. If there were some way to add GPU acceleration to these chat models, it would be great. As you can tell, I'm very new to this, but with 4 CPU cores it does peg them when you ask simple questions. RAM usage is not a concern, as I have 128GB available.
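For what it's worth, Intel GPUs are usually exposed to a container by passing the `/dev/dri` render device through, rather than via a special runtime like NVIDIA's. A minimal sketch, assuming the VM already has the kernel `i915` driver loaded and using a hypothetical image name:

```shell
# Pass the Intel Arc render node into the container.
# /dev/dri/renderD128 is typically the first GPU; the node may differ on your host.
docker run --rm \
  --device /dev/dri:/dev/dri \
  my-chat-image   # hypothetical image name
```

The image itself would also need Intel's compute runtime (e.g. the `intel-opencl-icd` and Level Zero packages on Ubuntu) installed before applications inside the container can see the GPU; whether the model backend can then actually use it is a separate question.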
Thanks! - I realize there may be technical limitations here as well.
### Checklist
- [X] I have checked for existing issues that describe my suggestion prior to opening this one.
- [X] I understand that improperly formatted feature requests may be closed without explanation.
100% would love to see more AI use the Arc cards.
Do you have an example of using those GPUs in Docker? I don't know if llama-cpp-python even supports them.