Wei-Ming Chen

Results: 54 comments by Wei-Ming Chen

Closing due to inactivity. Feel free to reopen.

Closing due to inactivity. Feel free to reopen.

Hi @ellial, our quantization implementation basically follows TensorFlow Lite Micro, so we evaluate accuracy on servers with the TF Lite runtime (example: https://github.com/mit-han-lab/mcunet/blob/be404ea0dbb7402783e1c825425ac257ed35c5fc/eval_tflite.py). We are also working on supporting different...
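For context, TF Lite (and TF Lite Micro) represents int8 tensors with an affine scale/zero-point mapping. The sketch below is a minimal, illustrative implementation of that per-tensor scheme, not the project's actual evaluation code; the function names and the example scale/zero-point values are hypothetical.

```python
def quantize(x, scale, zero_point):
    """Map a real value to int8 using the affine scheme q = round(x/scale) + zp,
    clamped to the int8 range [-128, 127]."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover an approximation of the real value: x ~= (q - zp) * scale."""
    return (q - zero_point) * scale

# Round trip with an assumed scale of 0.01 and zero point of 0:
q = quantize(0.5, 0.01, 0)        # -> 50
x = dequantize(q, 0.01, 0)        # -> 0.5 (up to quantization error)
```

The round-trip error is bounded by half the scale, which is why evaluating the quantized model's end-to-end accuracy (as the linked `eval_tflite.py` does) is the meaningful check rather than per-tensor error.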

Closing due to inactivity. Feel free to reopen.

Closing due to inactivity. Feel free to reopen.

Hi, we have released the patch-based inference feature. Feel free to check out [the tutorial](https://github.com/mit-han-lab/tinyengine/blob/master/examples/vww_patchbased.py) and [the code generation script](https://github.com/mit-han-lab/tinyengine/blob/master/examples/vww_patchbased.py) for more details.
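The idea behind patch-based inference is to run the memory-heavy early layers on small spatial patches (with a halo of overlap for the receptive field) instead of the whole feature map, so peak memory scales with the patch rather than the full input. A toy 1-D sketch of that equivalence, with hypothetical function names and not TinyEngine's actual generated code:

```python
def conv1d(x, k):
    """Valid 1-D correlation of list x with kernel k (the whole-input baseline)."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n)) for i in range(len(x) - n + 1)]

def conv1d_patched(x, k, patch):
    """Same output, computed patch by patch.

    Each chunk carries an extra (len(k) - 1)-element halo so the convolution
    at the patch boundary sees the same inputs as the full-input version."""
    n = len(k)
    out = []
    for start in range(0, len(x) - n + 1, patch):
        chunk = x[start : start + patch + n - 1]
        out.extend(conv1d(chunk, k))
    return out

# Both paths produce identical results; only the peak working set differs.
x, k = [1, 2, 3, 4, 5, 6], [1, 1]
assert conv1d_patched(x, k, patch=2) == conv1d(x, k)
```

The trade-off is recomputation in the overlapping halo regions, which is why the technique is typically applied only to the first few layers where activation memory peaks.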

Hi @EricWu-ZL, Thanks for reaching out! I have merged the fix https://github.com/mit-han-lab/tinyengine/pull/36. Please update the codebase and try again. Feel free to let me know if there is any other...

Closing this issue as patch-based inference is now supported. Feel free to reopen if needed.

Closing due to inactivity. Feel free to reopen.