MoGe
Optimize model device selection, add CPU inference support, and refactor geometric functions to adapt to device types
I ran the app in a GitHub Codespace, and the Gradio app works.
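For context, the usual device-agnostic pattern in PyTorch is a small sketch like the following (the model here is a hypothetical stand-in, not MoGe's actual architecture): select CUDA when available, otherwise fall back to CPU, and keep all tensors on the chosen device.

```python
import torch

# Pick CUDA if a GPU is available; otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in for the real model; .to(device) moves its weights.
model = torch.nn.Linear(4, 2).to(device)

# Input must live on the same device as the model.
x = torch.randn(1, 4, device=device)
with torch.no_grad():
    y = model(x)

print(y.shape)  # torch.Size([1, 2])
```

Because native PyTorch ops run on CPU out of the box, this pattern alone usually gives CPU inference without rewriting any math in NumPy.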
Hi, thanks a lot for your contribution! However, the current PyTorch implementation already supports native CPU inference. Could you share the specific reasons or advantages you see for re-implementing part of the inference in NumPy?