Akaash Parthasarathy
3 comments
I'm also running into the same issue.
Hi @lonnietc, there is currently no plan to support distributed inference across multiple clients. WebLLM is intended to run in your browser and uses the WebGPU abstraction the browser provides.
Hi, this should be fixed in version 0.2.80. If you check out the TVM and MLC-LLM commits listed in https://github.com/mlc-ai/web-llm/pull/747, you should be able to compile without issues. Please ping...