Results 166 comments of Charlie Ruan

I see. Could you provide some steps to reproduce the issue? I tried: open the page, download some of the shards, close it, reopen the page to keep...

@DavidGOrtega Hmm I see. From my understanding, the cache should be transactional (all or nothing), so I am not really sure what is causing the issue here. Would be great...
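To illustrate the all-or-nothing expectation mentioned above, here is a minimal sketch of a pure helper that decides whether a cached download should be treated as complete. The shard names and the helper itself are hypothetical illustrations, not WebLLM's actual cache internals:

```typescript
// Hypothetical helper: a cached model should only be considered usable
// when every expected shard is present (all-or-nothing semantics).
// The shard names below are illustrative, not WebLLM's real cache layout.
function isFullyCached(expectedShards: string[], cachedKeys: string[]): boolean {
  const cached = new Set(cachedKeys);
  return expectedShards.every((shard) => cached.has(shard));
}

// Example: an interrupted download left only two of three shards cached,
// so the model should be re-fetched rather than loaded from cache.
const expected = ["params_shard_0.bin", "params_shard_1.bin", "params_shard_2.bin"];
console.log(isFullyCached(expected, ["params_shard_0.bin", "params_shard_1.bin"])); // false
console.log(isFullyCached(expected, expected.slice())); // true
```

In a browser the `cachedKeys` list would come from the Cache API (`caches.open(...)` then `cache.keys()`); the point is that a partial set of shards should never be reported as a cache hit.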

Closing the issue for now, feel free to open new issues if the problem persists.

Agreed; will make this change soon!

This should be resolved by https://github.com/mlc-ai/web-llm/pull/570. Now building WebLLM from source only requires `npm install; npm run build`, without needing to run `./scripts/prep_deps.sh`. Closing this one for...

Thanks for all your work and contributions! A similar solution is included in npm 0.2.36, and the following code snippet in https://jsfiddle.net/ should work with no setup:

```javascript
// See...
```

Thanks for reporting the error. Could you share the [log in the console](https://developer.chrome.com/docs/devtools/open)? Not sure if there is more info there. Also, does this issue occur with all models? Could you...

This is a bit strange. WizardMath has `q4f16_1`, so could you also try `Llama-2-7B-q4f16_1`? Also, how much RAM do you have? My guess is that it is an OOM issue. Looking...

Thanks for the suggestion! We are thinking about adding versioning to the binary name, say `Llama-v0.2.21`; then, when updating the binary, we would bump the name to `Llama-v0.2.22` so the outdated one in...
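A sketch of how version-suffixed names could let outdated binaries be evicted. The naming scheme and helper are assumptions based on the comment above, not the shipped implementation:

```typescript
// Hypothetical eviction logic: with a version suffix baked into each
// binary's cache key (e.g. "Llama-v0.2.21"), entries for any other
// version of the same model can be identified and deleted on update.
function findOutdated(
  cachedNames: string[],
  model: string,
  currentVersion: string
): string[] {
  const current = `${model}-v${currentVersion}`;
  return cachedNames.filter(
    (name) => name.startsWith(`${model}-v`) && name !== current
  );
}

// After bumping to v0.2.22, the v0.2.21 entry is flagged for deletion,
// while other models' entries are left alone:
const cached = ["Llama-v0.2.21", "Llama-v0.2.22", "Mistral-v0.2.22"];
console.log(findOutdated(cached, "Llama", "0.2.22")); // ["Llama-v0.2.21"]
```

In a browser, the cached names would come from `caches.open(...)` / `cache.keys()`, and each stale entry would then be removed with `cache.delete(...)`.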