web-llm
High-performance In-browser LLM Inference Engine
It would be great to have Gemma 7B support on top of Gemma 2B.
Hi community, we recently updated our models' `mlc-chat-config.json` on Hugging Face to use the latest conversation template. The goal is to make templates more lightweight, hence not requiring an npm...
Cannot find module 'tvmjs' or its corresponding type declarations, even after running `npm install tvmjs`
```
(gh_web-llm) amd00@asus00:~/llm_dev/web-llm$ npm run build

> @mlc-ai/[email protected] build
> rollup -c

src/index.ts → lib/index.js...
[!] (plugin rpt2) Error: src/chat_module.ts:1:24 - error TS2307: Cannot find module 'tvmjs' or its corresponding type...
```
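One plausible cause, assuming `tvmjs` here refers to the vendored TVM web runtime rather than a registry package: installing `tvmjs` from npm does not give TypeScript anything to resolve, so the compiler still reports TS2307. A hedged sketch of a `paths` mapping in `tsconfig.json` that redirects the bare `tvmjs` import to a local checkout (the path below is hypothetical; point it at wherever the TVM web runtime actually lives in your tree):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "tvmjs": ["path/to/tvm/web"]
    }
  }
}
```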
## Overview The goal of this task is to implement APIs that are [OpenAI API](https://platform.openai.com/docs/api-reference) compatible. Existing APIs like `generate()` will still be kept. Essentially we want JSON-in and JSON-out,...
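To make the JSON-in / JSON-out goal concrete, here is a minimal sketch of the request/response shapes, following the field names of the OpenAI Chat Completions API. The `mockComplete` helper is hypothetical and only illustrates the data flow; it is not web-llm's actual engine API.

```typescript
// OpenAI-style chat completion shapes (subset of the real API's fields).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionRequest {
  messages: ChatMessage[];
  temperature?: number;
  max_tokens?: number;
}

interface ChatCompletionResponse {
  choices: { index: number; message: ChatMessage; finish_reason: string }[];
}

// Hypothetical stand-in for the inference engine: echoes the last message.
function mockComplete(req: ChatCompletionRequest): ChatCompletionResponse {
  const last = req.messages[req.messages.length - 1];
  return {
    choices: [
      {
        index: 0,
        message: { role: "assistant", content: `echo: ${last.content}` },
        finish_reason: "stop",
      },
    ],
  };
}

const reply = mockComplete({ messages: [{ role: "user", content: "hello" }] });
console.log(reply.choices[0].message.content); // "echo: hello"
```

The point of the compatibility layer is that callers only ever see these JSON shapes, so existing OpenAI client code can be pointed at the in-browser engine with minimal changes.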

How can I make get-started run in Node.js?
Recently I integrated WebLLM into my web project, and gemma-2b performs quite well. Thanks for your work; everything runs very well. I am trying to add more...
If the download fails, the model cannot recover and will always fail until the cache is cleared. Related to #280 and #284.
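A possible manual workaround sketch until the engine recovers on its own: delete the relevant Cache Storage entries so the next load re-downloads the artifacts. The `"webllm"` name filter below is an assumption; check the actual cache names in DevTools (Application → Cache Storage) before deleting anything.

```typescript
// Return the cache names that look like web-llm's, leaving unrelated
// caches (app assets, etc.) untouched. The "webllm" marker is an assumption.
function selectCachesToClear(names: string[], marker = "webllm"): string[] {
  return names.filter((n) => n.toLowerCase().includes(marker));
}

// In the browser, you would then delete each matching cache via the
// standard CacheStorage API:
//
//   const names = await caches.keys();
//   for (const n of selectCachesToClear(names)) {
//     await caches.delete(n);
//   }

console.log(selectCachesToClear(["webllm/model", "my-app-assets"])); // ["webllm/model"]
```

Filtering by name first, instead of wiping all caches, keeps the workaround from evicting unrelated data the page may depend on.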
