mlc-llm
Does mlc-llm support parallelism such as multi-GPU or multi-node inference?
Not yet. Distributed inference is an interesting topic, but it is not supported in TVM yet.
Ok thanks
At this moment, the project focuses on single consumer-class GPUs, making it possible for everyone to run models on their own laptops and phones. We will bring in distributed inference later.
This is now supported.
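For readers landing here later: a sketch of how multi-GPU tensor parallelism can be enabled in current MLC LLM releases. The model path, shard count, and exact flag spelling below are assumptions based on the project's documented `tensor_parallel_shards` model-config override; check the official docs for your version before relying on it.

```shell
# Hypothetical example: serve a model sharded across 2 GPUs via tensor
# parallelism. "HF://mlc-ai/SOME-MODEL-MLC" is a placeholder model id,
# and the override name assumes the documented tensor_parallel_shards
# config field.
mlc_llm serve HF://mlc-ai/SOME-MODEL-MLC \
  --overrides "tensor_parallel_shards=2"
```

The same `tensor_parallel_shards` setting can also be written into the model's `mlc-chat-config.json` at compile time instead of being passed as a runtime override.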