Tianqi Chen
Yes, I think there are people successfully running on AMD GPUs: https://www.reddit.com/r/LocalLLaMA/comments/132igcy/project_mlc_llm_universal_llm_deployment_with_gpu/ Please try upgrading to the latest driver and see if it works.
This should be resolved in the latest instructions: https://mlc.ai/mlc-llm/docs/
We just added a new update (#14), which should have shipped to conda by now. You can type `/stats` after a conversation to get the measured speed.
The issue should be fixed by now; see also https://github.com/mlc-ai/mlc-llm/issues/149
Feel free to send a PR.
Closed in favor of #113. We would like to thank @tirthasheshpatel for your effort in bringing this change forward.
Sorry, I thought it was referring to the AoS approach when skimming through. Feel free to reopen.
Currently this is not on the roadmap, mainly because the training API is broader and harder to expose. Do you have anything particular in mind about what you would want?
It looks like you are using an x86 conda despite being on M1. You could try a conda distribution that supports arm64 natively; for example, miniforge ships with native Apple Silicon support.
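As a quick way to confirm which architecture your current Python was built for (a minimal sketch using only the standard library, independent of any conda specifics):

```python
import platform

# A native arm64 Python on Apple Silicon reports "arm64";
# an x86 conda environment running under Rosetta reports "x86_64" instead.
print(platform.machine())
```

If this prints "x86_64" on an M1 machine, the environment is an Intel build and a native arm64 installer (such as miniforge) is the likely fix.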