
High-performance In-browser LLM Inference Engine

Results: 295 web-llm issues, sorted by recently updated

It would be great to have Gemma 7B support on top of Gemma 2B.

Hi community, we recently updated our models' `mlc-chat-config.json` on Hugging Face to use the latest conversation template. The goal is to make templates more lightweight, hence not requiring an npm...

```
(gh_web-llm) amd00@asus00:~/llm_dev/web-llm$ npm run build

> @mlc-ai/[email protected] build
> rollup -c

src/index.ts → lib/index.js...
[!] (plugin rpt2) Error: src/chat_module.ts:1:24 - error TS2307: Cannot find module 'tvmjs' or its corresponding type...
```

## Overview

The goal of this task is to implement APIs that are [OpenAI API](https://platform.openai.com/docs/api-reference) compatible. Existing APIs like `generate()` will still be kept. Essentially we want JSON-in and JSON-out,...
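For illustration only, here is a minimal sketch of what JSON-in / JSON-out request and response shapes could look like. The field names mirror the linked OpenAI chat completions reference; the `engine.chatCompletion` call mentioned in the comment is a hypothetical placeholder, not the final API this tracking issue will land on.

```typescript
// Hypothetical sketch: OpenAI-style JSON-in / JSON-out shapes.
// Field names follow the OpenAI chat completions reference; the
// `engine.chatCompletion` entry point below is a placeholder only.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionRequest {
  model: string;            // a local model id from the prebuilt list (assumption)
  messages: ChatMessage[];
  temperature?: number;
  max_tokens?: number;
  stream?: boolean;
}

interface ChatCompletionResponse {
  id: string;
  choices: { index: number; message: ChatMessage; finish_reason: string }[];
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// Usage sketch (placeholder function name):
// const reply: ChatCompletionResponse = await engine.chatCompletion(request);
```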

status: tracking

![image](https://github.com/mlc-ai/web-llm/assets/73160333/f6d45395-a080-41e1-9780-5dfdafa2ca97)

How can I make get-started run in Node.js?

Recently I integrated WebLLM into my web project, and the results with gemma-2b are pretty good. Thanks for your work; everything runs very well. I am trying to add more...
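As a rough illustration of this kind of integration, a minimal browser-side sketch is below. It assumes the `ChatModule` / `reload` / `generate` entry points exposed by `@mlc-ai/web-llm` at the time; the model id string and the progress-callback signature are assumptions and may differ in your version.

```typescript
// Minimal sketch of embedding web-llm in a web page (assumptions noted above).
import { ChatModule } from "@mlc-ai/web-llm";

async function main() {
  const chat = new ChatModule();
  // Download (or load from cache) the model artifacts, then initialize.
  // "gemma-2b-it-q4f16_1" is an assumed model id; check your prebuilt list.
  await chat.reload("gemma-2b-it-q4f16_1");
  // Stream partial output to the console as it is generated
  // (callback shape is an assumption).
  const reply = await chat.generate(
    "Summarize WebGPU in one sentence.",
    (_step: number, partial: string) => console.log(partial)
  );
  console.log("final:", reply);
}

main();
```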

If the download fails, the model cannot recover and will always fail unless the cache is cleared. Related to #280 and #284.
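Until a proper fix lands, one possible workaround sketch is to clear the browser Cache Storage entries holding the partially downloaded artifacts so the next reload starts fresh. This assumes the model files are cached via the standard Cache API; the `"webllm"` name filter below is a guess and may need adjusting for your setup.

```typescript
// Sketch: delete Cache Storage entries so a failed model download can retry.
// Assumes artifacts live in the Cache API; the "webllm" name filter is a guess.
async function clearModelCaches(): Promise<void> {
  const names = await caches.keys();
  for (const name of names) {
    if (name.toLowerCase().includes("webllm")) {
      const deleted = await caches.delete(name);
      console.log(`cache "${name}" deleted:`, deleted);
    }
  }
}
```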