
Fine-tuning on a dataset

laiba12345 opened this issue • 8 comments

  1. Do we need to set the 'views' folder as the root path in the training config (the views folder contains the rgba and pose folders), or do we need to add the models to the directory as well?
  2. How do we configure the model for fine-tuning?

laiba12345 · Mar 18 '24

Hi,

I'm not sure what you mean by adding models to the directory. To enable training, root_dirs should point to a directory containing multiple folders, e.g. uid1, uid2, uid3, etc., and each uid folder should contain rgba, pose, and intrinsics.npy:

  • root_dir
    • uid1
      • rgba
      • pose
      • intrinsics.npy
    • uid2
      • ....
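
If it helps, here is a quick sanity check for that layout. This helper is illustrative only (it is not part of OpenLRM): it verifies that each uid folder under root_dir has the rgba/, pose/, and intrinsics.npy entries described above.

```python
import os

def list_valid_uids(root_dir: str) -> list[str]:
    """Illustrative helper (not part of OpenLRM): return the uids under
    root_dir whose folders contain rgba/, pose/, and intrinsics.npy."""
    valid = []
    for uid in sorted(os.listdir(root_dir)):
        uid_dir = os.path.join(root_dir, uid)
        if not os.path.isdir(uid_dir):
            continue
        has_all = (
            os.path.isdir(os.path.join(uid_dir, "rgba"))
            and os.path.isdir(os.path.join(uid_dir, "pose"))
            and os.path.isfile(os.path.join(uid_dir, "intrinsics.npy"))
        )
        if has_all:
            valid.append(uid)
        else:
            print(f"skipping {uid}: missing rgba/, pose/, or intrinsics.npy")
    return valid
```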

For fine-tuning, please try using this method: https://github.com/3DTopia/OpenLRM/blob/c2260e0f2a2f16c86f20c3f844d391692f2ae6ea/openlrm/runners/train/base_trainer.py#L206
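
The linked method is what actually performs the loading; as a rough sketch of the general idea only (this is not the repo's code, and the checkpoint path and key layout are assumptions), fine-tuning means loading the pretrained weights into the model before the training loop starts:

```python
import torch

def load_pretrained(model: torch.nn.Module, ckpt_path: str) -> None:
    """Illustrative sketch only; the real loading logic is the
    base_trainer.py method linked above."""
    state = torch.load(ckpt_path, map_location="cpu")
    # Checkpoints sometimes nest weights under a key such as "model";
    # unwrap if so (an assumption, check your checkpoint's structure).
    if isinstance(state, dict) and "model" in state:
        state = state["model"]
    # strict=False tolerates small mismatches between model versions.
    missing, unexpected = model.load_state_dict(state, strict=False)
    print(f"{len(missing)} missing keys, {len(unexpected)} unexpected keys")
```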

ZexinHe · Mar 21 '24

Hi @ZexinHe , thank you for the advice. I've prepared images through Objaverse Rendering, as you can see below: [image] However, I'm not sure how to prepare the pose and intrinsics.npy. I'm very new to AI, so could you please give more details about training?

Also, I have no idea how to prepare those meta_path JSON files in train-sample.yaml: [image]

It would be great if you could help me. Thanks in advance :)

hayoung-jeremy · Apr 16 '24

I'm not so sure, but following the code, openlrm/datasets/base.py line 46 expects a file path to a JSON containing the uids.

So I would try creating a JSON file containing ["uid1", "uid2", ...]. See #33.

And reference that JSON by setting its path in the config:

meta_path:
  train: "your_path_to_json"
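
For example, a small script along these lines (paths are placeholders) could generate that JSON from the root_dir layout described above:

```python
import json
import os

root_dir = "your_path_to_root_dir"  # placeholder, layout as described above
uids = sorted(
    d for d in os.listdir(root_dir)
    if os.path.isdir(os.path.join(root_dir, d))
)
with open("your_path_to_json", "w") as f:
    json.dump(uids, f)  # produces ["uid1", "uid2", ...]
```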

juanfraherrero · Apr 16 '24

Thank you for your kind reply, @juanfraherrero! I'll try it!

hayoung-jeremy · Apr 17 '24

Hi @juanfraherrero , is it possible to fine-tune the pretrained LRM models with my small custom dataset? I'm currently trying to overfit on my 100 pairs of high-quality glb files, but that is training from scratch. So I wonder if it is possible to use the pretrained models mentioned in the README.md, such as openlrm-mix-large-1.1. I don't see any guidelines for fine-tuning, so I'm asking for your help. Thank you in advance!

hayoung-jeremy · Apr 24 '24

Hi @hayoung-jeremy , I tried to train for some epochs, but I don't have enough VRAM (always CUDA out-of-memory) to even load the model. In issue #2 the author said he used 32 A100s to train the model for 2 days. Unless you have a good GPU, it will be impossible.

About the guideline, I followed the instructions in the README: first prepare your data, then train. But as I said, I can't load the model, so I don't know if I did it right.

Sorry! Good Luck.

juanfraherrero · Apr 24 '24

Hi @juanfraherrero , thank you for your kind reply!

I've tried fine-tuning using the base model provided by OpenLRM. If you're interested, please take a look at this.

hayoung-jeremy · Apr 25 '24

Can anyone give us some insight into how long it takes to prepare the training data? Thanks!

ChendiDotLin · Sep 26 '24