Results: 166 comments by Charlie Ruan

Hi @beaufortfrancois, the demo now has three models (ones with `-1k` suffix) that should be able to run with 128MB: https://webllm.mlc.ai/
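As a side note, the `-1k` suffix marks variants whose context window is capped at 1k tokens, which shrinks the KV cache enough to fit under a 128MB buffer limit. A small illustrative helper for spotting these variants in a model list (the function name and model ids below are made up for illustration, not taken from the demo's code):

```javascript
// Sketch: filter a model list down to the low-memory "-1k" variants,
// following the naming convention used on https://webllm.mlc.ai/.
// The model ids passed in are hypothetical examples.
function lowMemoryVariants(modelIds) {
  return modelIds.filter((id) => id.endsWith("-1k"));
}

// Example: only the 1k-context variants survive the filter.
console.log(lowMemoryVariants(["SomeModel-q4f16_1", "SomeModel-q4f16_1-1k"]));
```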

@beaufortfrancois I looked into it but had trouble replicating the WebGPU OOM error on my device (a MacBook)... Could you help me triage a bit? Specifically, do you see this...

Hi @beaufortfrancois, thanks for the confirmation; I just got my hands on a Samsung Galaxy S23 to try to reproduce and hopefully catch the OOM error (after concluding that non-mobile...

Ahh, thank you! I didn't know Chrome Canary was also available on Android. I was able to reproduce the crash as well; will let you know how it goes.

This error should be addressed in npm `0.2.36` -- the `next-simple-chat` example should now work out of the box. For details, see https://github.com/mlc-ai/web-llm/pull/397.

@beaufortfrancois Unfortunately not yet... Sorry for the delay; I have been caught up with various other things. But this is still on my mind, and I'll see if I can...

@beaufortfrancois Got the Android development workflow going and was able to reproduce the crash. Tried `device.pushErrorScope()`, but it is not able to catch the error for me before the crash; at least on...
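For context, here is a minimal sketch of how `pushErrorScope("out-of-memory")` is meant to catch allocation failures, assuming a `GPUDevice` obtained elsewhere. The helper name and parameters are illustrative, not from the discussion above; as the comment notes, a hard device loss on mobile can still occur before the scope ever reports anything.

```javascript
// Sketch: wrap a buffer allocation in a WebGPU "out-of-memory" error scope.
// pushErrorScope/popErrorScope are standard GPUDevice methods; the helper
// name and return shape here are illustrative.
async function allocateWithOomCheck(device, sizeBytes) {
  device.pushErrorScope("out-of-memory");
  const buffer = device.createBuffer({
    size: sizeBytes,
    usage: 0x0080, // GPUBufferUsage.STORAGE, written as a literal for portability
  });
  // popErrorScope() resolves to a GPUError if the allocation failed within
  // the scope, or null on success. A device-level crash can still preempt
  // this, which matches the behavior observed on the Galaxy S23 above.
  const error = await device.popErrorScope();
  if (error) {
    return { buffer: null, errorMessage: error.message };
  }
  return { buffer, errorMessage: null };
}
```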

Hi @tlopex, I have the impression that you are on a Windows machine; have you run into this issue before?

@tlopex Thanks for sharing! @nico-martin I believe the main reason `prep_deps.sh` fails is that the `make` step under `tvm-unity/web` fails, and I am still confused about why that fails on your device;...