web-llm
High-performance In-browser LLM Inference Engine
I have successfully run it on my macOS machine, but it is using the Intel GPU, which is too slow. My Mac has an 8 GB AMD graphics card; how can I...
hugging face -> Hugging Face
Can you write a tutorial for local deployment?
On an M1 MacBook Pro with 16 GB of RAM running macOS 13.1
So I have been following the AI boom that started over the past 1-2 months. I am personally very interested in decentralizing AI models and access. My project (https://lumeweb.com) is in...
Thank you very much for this open-source project, which allows ordinary computers to use LLMs. I hope to be able to call web-llm through an API in my own...
Chrome Version 114.0.5715.0 (Official Build) canary (64-bit). The error is: mlc.ai/:1 No available adapters. llm_chat.js:421 Error: Cannot find adapter that matches the request at Object. (tvmjs.bundle.js:587:24) at Generator.next () at...
Sorry for submitting this here, but I am wondering how I can run LLaMA using chrome-beta in headless mode.
While running on Chrome Canary I got these errors (first the `Init error`, and on the second request the rest): ``` [System Initalize] Initialize GPU device: WebGPU - intel...