[Device] Catch WebGPU OOM error
Prior to this PR, when users call createEngine() or reload() with a model that is too large for the device, the device would likely keep generating, ignoring the OOM error and producing incorrect output. See https://github.com/mlc-ai/web-llm/issues/356 and https://github.com/mlc-ai/web-llm/issues/209.
This PR catches such errors with device.lost.then(), relying on tvmjs to call device.destroy() upon detecting an error in createBuffer() via https://github.com/apache/tvm/pull/17005.
We have only observed errors from createBuffer() and hence only handle that kind of error for now. Besides, since most OOM errors occur during reload(), we make the error handling effectively synchronous despite using .then(): if an error has been recorded by the handler, we throw it at the end of reload(), as sketched below.
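A minimal sketch of this pattern, assuming hypothetical names (`EngineSketch`, `watchDevice`, `deviceLostError`); the actual implementation lives in web-llm's engine class, and only `device.lost` / `GPUDeviceLostInfo` are standard WebGPU API:

```typescript
// Sketch of the error-capturing pattern described above. GPUDevice and
// GPUDeviceLostInfo come from @webgpu/types; everything else is illustrative.
class EngineSketch {
  private deviceLostError?: Error;

  watchDevice(device: GPUDevice): void {
    // device.lost resolves when the device is lost or destroyed, e.g. when
    // tvmjs calls device.destroy() after createBuffer() reports an OOM error.
    // (An intentional destroy, e.g. in unload(), would also resolve it, so
    // the real implementation must distinguish that case.)
    device.lost.then((info: GPUDeviceLostInfo) => {
      this.deviceLostError = new Error(
        `WebGPU device was lost (reason: ${info.reason}): ${info.message}`,
      );
    });
  }

  async reload(modelId: string): Promise<void> {
    // ... load weights, allocate the KV cache, etc., for `modelId` ...
    // Because the steps above await GPU work, the .then() handler has had a
    // chance to run; rethrowing here surfaces the asynchronous device-lost
    // error as an ordinary error thrown from reload().
    if (this.deviceLostError !== undefined) {
      throw this.deviceLostError;
    }
  }
}
```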
Example of trying to allocate a KV cache with a 900K context length (the behavior should be similar when loading a model that is too large):
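For illustration, a hedged user-side sketch of triggering and catching this error; the `MLCEngine` class name, model id, and `context_window_size` option are assumptions and should be adapted to the actual web-llm API:

```typescript
import * as webllm from "@mlc-ai/web-llm";

async function main() {
  const engine = new webllm.MLCEngine(); // illustrative engine class
  try {
    // Request an oversized KV cache so that createBuffer() hits OOM.
    await engine.reload("Llama-3-8B-Instruct-q4f32_1", {
      context_window_size: 900_000, // ~900K tokens: too large for the device
    });
  } catch (err) {
    // With this PR, reload() rethrows the captured device-lost error
    // instead of silently continuing on a broken device.
    console.error("Model too large for this device:", err);
  }
}

main();
```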
Marked as a draft for now, as it depends on https://github.com/apache/tvm/pull/17005.