Nestor Qin
I've run into the same problem. This issue needs to be investigated further.
> i just changed the next.config.js, problem solved:
>
> ```ts
> /** @type {import('next').NextConfig} */
> const nextConfig = {
>   reactStrictMode: true,
>
>   webpack: (config, {...
> ```
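The quoted config is cut off above. As a reference point, here is a minimal sketch of that kind of webpack override, assuming the underlying problem is Node-only modules (e.g. `fs`, `perf_hooks`) being pulled into the client bundle; the fallback entries below are assumptions, not the original poster's exact fix:

```ts
/** @type {import('next').NextConfig} */
const nextConfig = {
  reactStrictMode: true,

  webpack: (config, { isServer }) => {
    if (!isServer) {
      // Assumption: stub out Node-only modules so the browser bundle
      // does not fail to resolve them at build time.
      config.resolve.fallback = {
        ...config.resolve.fallback,
        fs: false,
        perf_hooks: false,
      };
    }
    return config;
  },
};

module.exports = nextConfig;
```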
Consider trying https://github.com/mlc-ai/web-llm-chat, and create issues in the main web-llm repo for new model support requests.
I've developed a web application integrating Web-LLM with [NextChat (ChatGPT Next Web)](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web). You can access the live demo at https://chat.neet.coffee/ and here's a screenshot:

[screenshot]

Please...
The chat webapp https://chat.neet.coffee/ has been updated to:

- Resolve the repeated model initialization issue
- Support service worker
- Support streaming (see the sketch below)
- Support more customization

As the next step, I...
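For context on the streaming support mentioned above, here is a minimal sketch using web-llm's OpenAI-style completions API, assuming a recent `@mlc-ai/web-llm` release; the model ID is only an example from the prebuilt list:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Download/compile the model; progress is reported via the callback.
const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC", {
  initProgressCallback: (report) => console.log(report.text),
});

// With stream: true the call returns an async iterable of deltas
// instead of a single completed message.
const chunks = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
});

let reply = "";
for await (const chunk of chunks) {
  reply += chunk.choices[0]?.delta?.content ?? "";
}
console.log(reply);
```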
Closing this issue and moving tracking to https://github.com/mlc-ai/web-llm-chat.
> Perhaps of interest: Secret Llama recently created what you describe as well:
>
> https://secretllama.com/
>
> https://github.com/abi/secret-llama
>
> https://www.reddit.com/r/LocalLLaMA/comments/1cjjxc6/i_built_a_free_inbrowser_llm_chatbot_powered_by/

This is interesting and thank you for...
WebGPU is not yet fully supported by all browsers. Please check the [WebGPU compatibility table](https://caniuse.com/webgpu) or use https://webgpureport.org/ to check whether your browser supports WebGPU.
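Besides those tables, you can also probe support at runtime; a minimal sketch using the standard `navigator.gpu` API (for TypeScript typings the `@webgpu/types` package is assumed):

```ts
// navigator.gpu is only defined in browsers that expose WebGPU.
async function checkWebGPU(): Promise<boolean> {
  if (!("gpu" in navigator)) {
    console.log("WebGPU is not supported by this browser.");
    return false;
  }
  // requestAdapter() may still return null, e.g. on blocklisted
  // GPUs or missing drivers, even when the API itself exists.
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.log("WebGPU is exposed, but no suitable adapter was found.");
    return false;
  }
  console.log("WebGPU is available.");
  return true;
}

checkWebGPU();
```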
@louis030195 For the slow loading on Vercel, I'm not sure about the cause and I haven't run into the issue myself. We have WebLLM Chat deployed both on GitHub Pages (https://chat.webllm.ai)...