Mr.Userbox
I just found the repo a few days ago and haven't tried it yet, but I'm very excited to find time to test it out. I also have AMD...
We should make a Discord group for ZLUDA and llamacpp.
Sup guys, has anybody managed to run llamacpp with ZLUDA yet? Can you share the steps, please? I'd like to test it.
One of the rocBLAS contributors told me to try this in a Docker container. He claims it makes it possible to use all the AMD GPUs in your PC, old ones,...
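For anyone who wants to try the container route, here's a minimal sketch of launching a ROCm dev container, assuming the `rocm/dev-ubuntu-22.04` image (the exact image the contributor meant isn't stated in the thread, so that part is my assumption; the device/group flags are the standard ones from ROCm's Docker docs):

```shell
# Hypothetical sketch: expose the AMD GPU devices to a ROCm container.
# Image name is an assumption; the --device and --group-add flags are
# the usual ones ROCm containers need to see the GPUs.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  rocm/dev-ubuntu-22.04

# Inside the container, rocminfo should list every GPU in the box:
rocminfo
```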
This project is open source; we have the code, so we can find the time to improve it and keep developing it if we're really interested. @Baigle if you want to...
Hello guys, first of all, thanks for all the hard work you're doing to make the RX 5700 work. I'm just a hobbyist and not even close to your league...
@smirgol one of the rocBLAS contributors says we can compile llamacpp with hipBLAS and mix old and new GPUs: https://github.com/ROCm/rocBLAS/pull/1251#issuecomment-1936685074
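In case it helps anyone, this is roughly what a hipBLAS build of llamacpp looked like around that time. A sketch, not a verified recipe: the `LLAMA_HIPBLAS` option comes from llama.cpp's HIP build docs of that era, `AMDGPU_TARGETS` is the standard ROCm CMake variable for picking GPU architectures, and the specific gfx list (gfx1010 is the RX 5700) is just an example for mixing an old and a new card:

```shell
# Hypothetical build sketch for llama.cpp with hipBLAS on ROCm.
# gfx1010 = RX 5700 (old card), gfx1030 = RX 6800/6900 class (newer card);
# adjust the list to whatever GPUs are actually in your machine.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build \
  -DLLAMA_HIPBLAS=ON \
  -DAMDGPU_TARGETS="gfx1010;gfx1030" \
  -DCMAKE_C_COMPILER=hipcc \
  -DCMAKE_CXX_COMPILER=hipcc
cmake --build build -j
```

If the binary then only sees one of the cards, the `HIP_VISIBLE_DEVICES` environment variable is the usual way to check which GPUs it's picking up.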
lol, I just noticed it's cgmb. Sup bro, I just DM'd you in the other repo chat, lol.
@danielhanchen will this help, bro? https://github.com/ROCm/triton/tree/triton-mlir What would be the next step?
@jamesxu2 sorry for the delay, I just saw the response. I had turned off the hobby computer I use for testing AI stuff; going to turn it on this...