166 comments of Charlie Ruan

Ahh yes, there is a DEBUG mode here: https://github.com/mlc-ai/web-llm/issues/519#issuecomment-2263648799 Any log that may relate to the crash would be helpful, thanks!

Ah yes! There is a `logLevel` option in EngineConfig. You can set it to `INFO`, as done here: https://github.com/mlc-ai/web-llm/blob/main/examples/simple-chat-ts/src/simple_chat.ts#L345
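
For reference, a minimal sketch of passing `logLevel` through the engine config (the model ID is illustrative, not taken from the linked example):

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Minimal sketch: logLevel is set via the engine config.
// The model ID below is illustrative; pick any ID from the prebuilt model list.
const engine = await CreateMLCEngine(
  "Llama-3.1-8B-Instruct-q4f32_1-MLC",
  {
    logLevel: "INFO", // use "DEBUG" for the more verbose mode mentioned earlier
    initProgressCallback: (report) => console.log(report.text),
  },
);
```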

I see... thanks for the info!

There are various similar issues on mobile devices, probably related to WebGPU on Chrome for Android. I don't have a fix off the top of my head. Not sure if...

Quick question: are you using WebWorker, ServiceWorker, or the plain MLCEngine? For ServiceWorker, my understanding is that this PR fixed it: https://github.com/mlc-ai/web-llm/pull/471
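
For context, a rough sketch of the three ways to create an engine (the model ID and worker file path are assumptions for illustration); all three expose the same OpenAI-style API, and the worker variants keep inference off the main thread:

```ts
import {
  CreateMLCEngine,
  CreateWebWorkerMLCEngine,
  CreateServiceWorkerMLCEngine,
} from "@mlc-ai/web-llm";

const modelId = "Llama-3.1-8B-Instruct-q4f32_1-MLC"; // illustrative model ID

// Plain engine: everything runs on the main thread.
const plainEngine = await CreateMLCEngine(modelId);

// Web worker engine: inference runs in a dedicated worker; ./worker.ts is
// assumed to instantiate a WebWorkerMLCEngineHandler.
const workerEngine = await CreateWebWorkerMLCEngine(
  new Worker(new URL("./worker.ts", import.meta.url), { type: "module" }),
  modelId,
);

// Service worker engine: the page must register a service worker running a
// ServiceWorkerMLCEngineHandler before this call.
const swEngine = await CreateServiceWorkerMLCEngine(modelId);
```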

This seems to be an issue where the web worker is terminated because the phone goes into standby, but your frontend logic's state is still preserved, hence directly sending a...
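
To illustrate, here is a hypothetical recovery sketch, assuming the `CreateWebWorkerMLCEngine` API; the retry pattern and helper names are mine, not from web-llm's examples. The idea is that if a request fails because the worker was killed during standby, the frontend discards its stale handle and rebuilds the worker:

```ts
import { CreateWebWorkerMLCEngine, WebWorkerMLCEngine } from "@mlc-ai/web-llm";

let engine: WebWorkerMLCEngine | undefined;

// Rebuild the worker and reload the model if we have no live engine handle.
async function ensureEngine(modelId: string): Promise<WebWorkerMLCEngine> {
  if (engine === undefined) {
    engine = await CreateWebWorkerMLCEngine(
      new Worker(new URL("./worker.ts", import.meta.url), { type: "module" }),
      modelId,
    );
  }
  return engine;
}

async function chatWithRecovery(modelId: string, prompt: string) {
  try {
    const e = await ensureEngine(modelId);
    return await e.chat.completions.create({
      messages: [{ role: "user", content: prompt }],
    });
  } catch (err) {
    // If the worker was killed during standby, the frontend still holds a
    // handle to it; discard that handle and retry once with a fresh worker.
    engine = undefined;
    const e = await ensureEngine(modelId);
    return await e.chat.completions.create({
      messages: [{ role: "user", content: prompt }],
    });
  }
}
```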

This fix should be included in npm 0.2.56. Let me know if the issue is resolved!

Thanks for the issue! If I understand your request correctly, this should indeed work; e.g., on chat.webllm.ai you can see the response "loading model from cache".
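
A small sketch of how that cache check can surface to the user, assuming the exported `hasModelInCache` helper (the model ID is illustrative):

```ts
import { CreateMLCEngine, hasModelInCache } from "@mlc-ai/web-llm";

const modelId = "Llama-3.1-8B-Instruct-q4f32_1-MLC"; // illustrative model ID

// If the weights are already in the browser cache, CreateMLCEngine loads them
// from there rather than re-downloading, which is what the "loading model
// from cache" message reflects.
const cached = await hasModelInCache(modelId);
console.log(cached ? "Loading model from cache" : "Downloading model weights");
const engine = await CreateMLCEngine(modelId);
```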

Ah, you're right. Likely a bug in https://github.com/apache/tvm/blob/8a914e58925557741aca6d7453e5d94004254079/web/src/runtime.ts#L1316

This example should help: https://github.com/mlc-ai/web-llm/tree/main/examples/simple-chat-upload