web-llm
High-performance In-browser LLM Inference Engine
Chrome Version: 125.0.6283.3
OS: ChromeOS
GPU: Intel(R) Graphics (ADL GT2), Intel open-source Mesa driver: Mesa 23.3.0 (git-5cb3f1e4fa)
Dawn Backend: Vulkan

**What steps will reproduce the problem?**
1. Go to...
If I try to compile my own wasm, I get this error. Coming from #373.
`GPU Device Error: Uncaught (in promise) TypeError: lib$1.exports.detectGPUDevice is not a function`
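Judging by the name, `detectGPUDevice` is essentially the step that probes the browser for a usable WebGPU device, so a failure like this often means the compiled bundle and the runtime are out of sync. As a sanity check, a minimal standalone probe can rule out the GPU itself; this is a sketch assuming `@webgpu/types` for the `GPUDevice` type, and the helper name is hypothetical:

```ts
// Hypothetical helper: probe for WebGPU roughly the way detectGPUDevice does.
// Assumes a browser with WebGPU enabled and @webgpu/types installed.
async function checkWebGPU(): Promise<GPUDevice> {
  if (!("gpu" in navigator)) {
    throw new Error("WebGPU is not supported in this browser");
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (adapter === null) {
    throw new Error("No WebGPU adapter found (GPU may be blocklisted)");
  }
  // Request the actual device; this is what inference ultimately runs on.
  return adapter.requestDevice();
}
```

If this probe succeeds but the engine still throws, the problem is more likely the wasm build than the hardware.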
Hey WebLLM Team! 🌟 I've been diving into integrating WebLLM within an Obsidian plugin and stumbled upon a little hiccup. Running on Windows 11 with an Intel processor (which should...
When prompting to summarize a long article or a long chat.

* System: MacBook Pro M1
* Model: llama3
Are you planning to support LLaVA? I see you have this issue open: https://github.com/mlc-ai/web-llm/issues/276. Do you also plan to support Video-LLaVA?
When I try to load the wasm locally using [http-server](https://www.npmjs.com/package/http-server), I get this error regardless of whether the MIME type is `application/wasm` or `application/octet-stream`. I have always used this...
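For context, `WebAssembly.instantiateStreaming` requires the response to be served with the `application/wasm` content type, so a server that reports the wrong type can fail even when the file itself is valid. Below is a minimal sketch of a local static server that sets the type explicitly; it uses only Node's built-in `http` module, and the port and file layout are illustrative assumptions, not a fix confirmed for this issue:

```ts
// Minimal static file server sketch that serves .wasm with the MIME type
// required by WebAssembly.instantiateStreaming.
import * as http from "http";
import * as fs from "fs";
import * as path from "path";

const types: Record<string, string> = {
  ".wasm": "application/wasm",
  ".html": "text/html",
  ".js": "text/javascript",
};

http
  .createServer((req, res) => {
    const file = path.join(process.cwd(), req.url === "/" ? "index.html" : req.url ?? "");
    fs.readFile(file, (err, data) => {
      if (err) {
        res.writeHead(404);
        res.end("not found");
        return;
      }
      // Fall back to octet-stream for unknown extensions.
      const type = types[path.extname(file)] ?? "application/octet-stream";
      res.writeHead(200, { "Content-Type": type });
      res.end(data);
    });
  })
  .listen(8080, () => console.log("serving on http://localhost:8080"));
```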
I found that previously, generated code blocks were displayed in raw format without Markdown rendering. This adds Markdown support to ensure that code blocks and Markdown can...
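A minimal sketch of what that rendering step can look like, assuming the third-party `marked` package (not part of web-llm) and a hypothetical container element:

```ts
// Render model output as Markdown so fenced code blocks are formatted
// instead of shown as raw text. `marked` is an assumed third-party dep.
import { marked } from "marked";

async function renderReply(container: HTMLElement, reply: string): Promise<void> {
  // In a real app the HTML should be sanitized before insertion,
  // since model output is untrusted input.
  container.innerHTML = await marked.parse(reply);
}
```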
## Overview

There have been many great suggestions from the community regarding loading and caching model weights. This tracker issue compiles the suggestions and keeps track of the progress.

##...
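One recurring suggestion is reusing downloaded weight shards across sessions. A minimal sketch of that idea with the browser Cache API follows; the cache name and helper are illustrative assumptions, not web-llm's actual implementation:

```ts
// Illustrative sketch: fetch a model weight shard through the Cache API so
// repeat visits skip the network. Cache name and helper name are assumptions.
async function fetchShardWithCache(url: string): Promise<Response> {
  const cache = await caches.open("webllm-model-cache");
  const cached = await cache.match(url);
  if (cached !== undefined) {
    return cached; // served from the on-disk cache, no re-download
  }
  const response = await fetch(url);
  // Clone before caching: a Response body can only be consumed once.
  await cache.put(url, response.clone());
  return response;
}
```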
I love WebLLM, but I have to admit it hasn't been easy for me to integrate into my project. This is because the provided examples seem to assume that developers...
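For anyone hitting the same wall, here is a minimal end-to-end sketch based on the `@mlc-ai/web-llm` README API; exact names and model ids can differ between versions, so treat it as a starting point rather than canonical usage:

```ts
// Minimal sketch: load a prebuilt model and run one chat completion.
// Model id and API names follow the @mlc-ai/web-llm docs; verify them
// against the version you actually install.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main(): Promise<void> {
  const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC", {
    initProgressCallback: (p) => console.log(p.text), // download/compile progress
  });
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Say hello in one sentence." }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```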
👋