distributed-llama
Will this awesome project consider supporting GPU acceleration?
Very impressive work!
However, it doesn't seem to support GPUs yet. Would the author consider adding GPU acceleration?
Any suggestions for migrating this project to CUDA/HIP acceleration?
Thanks for any help!
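To make the question more concrete: the kind of incremental change I'm imagining is keeping the networking/synchronization layer as-is and offloading only the per-layer matrix multiplications to the GPU. This is purely a hypothetical sketch, not code from this repo (the kernel and buffer names are made up), and it uses float32 for simplicity; the Q40 weights would additionally need a dequantization step on the device.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical sketch: a float32 matrix-vector product y = W * x,
// which is the shape of the per-layer work in a transformer forward pass.
__global__ void matvecF32(const float* W, const float* x, float* y, int rows, int cols) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= rows) return;
    float sum = 0.0f;
    for (int c = 0; c < cols; c++) {
        sum += W[row * cols + c] * x[c];
    }
    y[row] = sum;
}

int main() {
    const int rows = 4096, cols = 4096;
    float *W, *x, *y;
    // Unified memory keeps the example short; a real port would keep the
    // weights resident on the device and only move activations back and forth.
    cudaMallocManaged(&W, sizeof(float) * rows * cols);
    cudaMallocManaged(&x, sizeof(float) * cols);
    cudaMallocManaged(&y, sizeof(float) * rows);
    for (int i = 0; i < rows * cols; i++) W[i] = 0.001f;
    for (int i = 0; i < cols; i++) x[i] = 1.0f;

    int threads = 256;
    int blocks = (rows + threads - 1) / threads;
    matvecF32<<<blocks, threads>>>(W, x, y, rows, cols);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect ~4.096
    cudaFree(W); cudaFree(x); cudaFree(y);
    return 0;
}
```

HIP follows essentially the same programming model, so a CUDA-first port could likely be converted with the hipify tooling rather than maintained twice.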
Hello @galenyu! Yes, GPU acceleration is planned.
Hello, thanks for your work. When will the GPU-accelerated version be released?
Hello. I am currently trying dllama. I also have a supercomputer: 6 nodes, 96 cores, 768 GB RAM, and 6 PNY NVIDIA RTX 4000 Ada Generation GPUs, and I need GPU support.
So, no promises, because I have other related projects on the fire, but what is missing, please? Could you give me a summary of the current progress, etc.?
Thanks in advance. Best regards. Benjamin.
I can run Llama 3.1 70B Instruct Q40.
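As a rough back-of-the-envelope check (assuming Q40 stores about 4.5 bits per weight, like a typical 4-bit block format with per-block scales): 70e9 weights × 4.5 / 8 ≈ 39 GB of weight data, so splitting the model across a few nodes leaves only on the order of 10 GB per node plus the KV cache.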
@b4rtaz I'm bumping this, as I'm interested to know what the status of this is.
Thanks.