Zexin He
Hi, I'm not sure what you mean by `add models to directory`. To enable training, `root_dirs` should point to a directory containing multiple per-object folders, e.g. `uid1`, `uid2`, `uid3`, etc. And `uid1`...
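As a rough sketch (the `root_dirs` value and the example path are only placeholders for illustration, and nothing is assumed about what each uid folder contains), you can check that a data root is laid out this way:

```python
# Minimal sketch: list the per-object folders under each training data root.
# Assumes each root directly contains one folder per object (uid1, uid2, ...);
# the contents of the uid folders are not inspected here.
from pathlib import Path

root_dirs = ["./data/objaverse_renders"]  # hypothetical example path

for root in map(Path, root_dirs):
    uids = sorted(p.name for p in root.iterdir() if p.is_dir())
    print(f"{root}: {len(uids)} objects, e.g. {uids[:3]}")
```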
## Recent updates until 2024.03.08
- Release [v1.1.0](https://github.com/3DTopia/OpenLRM/releases/tag/v1.1.0) with major code refactoring.
- Model [weights](https://huggingface.co/zxhezexin) updated for `Objaverse only` and `Objaverse + MVImgNet`.
- Release [blender script](https://github.com/3DTopia/OpenLRM/blob/main/scripts/data/objaverse/blender_script.py) used in rendering...
## Recent updates until 2024.03.13
- Release [v1.1.1](https://github.com/3DTopia/OpenLRM/releases/tag/v1.1.1).
- Upload the training code and a sample config.
Hi there, the inputs and outputs of OpenLRM are images, so you may need to render your `.obj` files to images first. See https://github.com/3DTopia/OpenLRM?tab=readme-ov-file#data-preparation
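If it helps, here is a minimal sketch of rendering a single `.obj` to images by running the repo's blender script headlessly; only the Blender flags (`--background`, `--python`, `--`) are standard, and the argument names passed to the script are hypothetical placeholders, so check the data-preparation docs for the actual interface:

```python
# Minimal sketch: render one .obj to images via a headless Blender run.
# Only the Blender flags are standard; the arguments after "--" that are
# forwarded to the script are hypothetical placeholders.
import subprocess

cmd = [
    "blender", "--background",
    "--python", "scripts/data/objaverse/blender_script.py",
    "--",                                # everything after "--" goes to the script
    "--object_path", "my_model.obj",     # hypothetical argument name
    "--output_dir", "renders/my_model",  # hypothetical argument name
]
subprocess.run(cmd, check=True)
```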
Hi, `No module named openlrm` means your working dir is not the root dir of your cloned repo. You can run `!realpath .` to check which dir you are working in...
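For example, in a notebook/Colab session you can check and fix the working directory like this (the clone path below is a hypothetical example):

```python
# Minimal sketch: print the current working dir and switch to the repo root
# so that `import openlrm` resolves. The clone path is a hypothetical example.
import os

print(os.getcwd())              # same idea as `!realpath .`
os.chdir("/content/OpenLRM")    # wherever you cloned the repo

import openlrm                  # should resolve once the cwd is the repo root
```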
Hi, please refer to this issue: https://github.com/3DTopia/OpenLRM/issues/24#issuecomment-2002950208. The latest commit has fixed this problem.
Hi, please prepare your data as described in https://github.com/3DTopia/OpenLRM/issues/26#issuecomment-2011337658 and your meta file as described in https://github.com/3DTopia/OpenLRM/issues/33#issuecomment-2031759272.
Hi, Thanks for your interest! We used DINOv1 during the initial development to align with the original LRM paper and to avoid introducing new factors. Now that the codebase is...
Hi, a single A100 is indeed probably a bit too little. You could watch how training converges with a small amount of data first, but the outlook is probably not optimistic.
Hi, please follow the data preparation instructions [here](https://github.com/3DTopia/OpenLRM/tree/main?tab=readme-ov-file#data-preparation). You may need to use other scripts and follow the instructions at https://github.com/allenai/objaverse-rendering for distributed rendering. `mathutils` and `bpy` are built-in packages...
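As a rough sketch of the distributed-rendering idea (not the official pipeline): since `bpy` and `mathutils` ship with Blender's bundled Python, the rendering script is launched through `blender --background --python ...` rather than imported into your system Python, and you can fan those launches out over many objects. The input folder and the script arguments after `--` are hypothetical placeholders:

```python
# Minimal sketch: fan out headless Blender renders over many objects.
# Only the Blender flags are standard; the script arguments and the paths
# are hypothetical placeholders.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def render_one(obj_path: Path) -> int:
    """Launch one headless Blender render and return its exit code."""
    cmd = [
        "blender", "--background",
        "--python", "scripts/data/objaverse/blender_script.py",
        "--", "--object_path", str(obj_path),           # hypothetical args
        "--output_dir", f"renders/{obj_path.stem}",
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    objects = sorted(Path("raw_models").glob("*.glb"))  # hypothetical input dir
    with ProcessPoolExecutor(max_workers=4) as pool:
        codes = list(pool.map(render_one, objects))
    print(f"{codes.count(0)}/{len(objects)} objects rendered successfully")
```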