layout-model-training

support pre-training or fine-tuning schemes

Open • bertsky opened this issue 5 years ago • 18 comments

Thanks for sharing your work, it's awesome!

I am eager to train this on my own materials, but they are comparatively scarce, and I don't have the computational capacity to train on the whole PubLayNet from scratch myself.

So I was wondering: what changes would be needed to continue training from your pre-trained models? Or, more elaborately, do you think it would be worthwhile to load an existing model, freeze most of the weights, and add some additional layers to the FPN for fine-tuning?

bertsky avatar Nov 05 '20 10:11 bertsky

Thank you for your kind words!

And sorry for only just seeing your issue. Sure, let me write a tutorial or put together some example code for this. Basically, you just need to load any of the pre-trained weights we've provided and repeat the training process.
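
In the meantime, a minimal sketch of that process with Detectron2 might look like the following. The config path, weight path, and dataset names below are placeholders, not files shipped with this repo:

```python
# Untested sketch: continue training from pre-trained weights with
# Detectron2. All paths and dataset names are placeholders.
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register your own data (assumed here to be in COCO format).
register_coco_instances("my_train", {}, "data/train.json", "data/images")

cfg = get_cfg()
cfg.merge_from_file("path/to/model_config.yaml")  # config of the pre-trained model
cfg.MODEL.WEIGHTS = "path/to/model_final.pth"     # the provided pre-trained weights
cfg.DATASETS.TRAIN = ("my_train",)
cfg.DATASETS.TEST = ()
cfg.SOLVER.MAX_ITER = 10000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)  # loads cfg.MODEL.WEIGHTS
trainer.train()
```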

lolipopshock avatar Dec 02 '20 16:12 lolipopshock

Oh, that would be great – thanks in advance!

I expect that just initializing with your pre-trained models and training on new data would quickly make the model forget your large and broad initial dataset, because the initial gradients will be large. Apart from freezing layers, I have also thought about reducing the learning rate or imposing restrictive gradient clipping. But I guess I'll have to run these experiments anyway...
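
For reference, all three of those mitigations are exposed through Detectron2's config system; a sketch with purely illustrative values, not tuned recommendations:

```python
# Untested sketch: Detectron2 config knobs for the mitigations above.
from detectron2.config import get_cfg

cfg = get_cfg()

# Freeze early backbone stages so their weights stay fixed.
cfg.MODEL.BACKBONE.FREEZE_AT = 5   # for ResNet, 5 freezes all stages

# Reduce the learning rate to keep the initial updates small.
cfg.SOLVER.BASE_LR = 0.0002

# Restrictive gradient clipping.
cfg.SOLVER.CLIP_GRADIENTS.ENABLED = True
cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE = "value"
cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE = 1.0
```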

bertsky avatar Dec 02 '20 16:12 bertsky

> Thanks for sharing your work, it's awesome!
>
> I am eager to train this on my own materials, but they are comparatively scarce, and I don't have the computational capacity to train on the whole PubLayNet from scratch myself.
>
> So I was wondering: what changes would be needed to continue training from your pre-trained models? Or, more elaborately, do you think it would be worthwhile to load an existing model, freeze most of the weights, and add some additional layers to the FPN for fine-tuning?

Same question here! And congratulations on this amazing tool.

Crnagora28 avatar Apr 12 '21 22:04 Crnagora28

Would be great to have such a script/documentation! Thanks a lot!

joshcx avatar May 05 '21 10:05 joshcx

> Oh, that would be great – thanks in advance!
>
> I expect that just initializing with your pre-trained models and training on new data would quickly make the model forget your large and broad initial dataset, because the initial gradients will be large. Apart from freezing layers, I have also thought about reducing the learning rate or imposing restrictive gradient clipping. But I guess I'll have to run these experiments anyway...

Any updates on your experiments, @bertsky? I intend to do something similar and would like to learn from your experience.

nasheedyasin avatar May 21 '21 06:05 nasheedyasin

@lolipopshock I am trying to fine-tune this model on my own custom data, which has different classes than those the model was trained on.

Here is what I have done:

  • Changed the config parameter MODEL.ROI_HEADS.NUM_CLASSES from 5 to 3.

This led to the following warning:

Skip loading parameter 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (6, 1024) in the checkpoint but (4, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (6,) in the checkpoint but (4,) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (20, 1024) in the checkpoint but (12, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (20,) in the checkpoint but (12,) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.mask_head.predictor.weight' to the model due to incompatible shapes: (5, 256, 1, 1) in the checkpoint but (3, 256, 1, 1) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.mask_head.predictor.bias' to the model due to incompatible shapes: (5,) in the checkpoint but (3,) in the model! You might want to double check if this is expected.
Some model parameters or buffers are not found in the checkpoint: roi_heads.box_predictor.bbox_pred.{bias, weight}

Now here is what I expect has happened:

  • The weights for the ROI heads aren't loaded, so those layers will be randomly initialized.
  • All other weights (the backbone, etc.) are loaded from the checkpoint.
  • The model can now be trained on my custom data (i.e., used for transfer learning).

Please correct me if I am wrong :)
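
For what it's worth, that matches Detectron2's checkpointing behavior: tensors with mismatched shapes are skipped (which is exactly what the quoted warnings report) and the corresponding layers keep their fresh random initialization. A minimal sketch of those steps, with placeholder config and weight paths:

```python
# Untested sketch: change the number of classes, then load the checkpoint.
from detectron2.config import get_cfg
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer

cfg = get_cfg()
cfg.merge_from_file("path/to/model_config.yaml")  # placeholder
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3               # pre-trained model had 5

model = build_model(cfg)  # heads sized for 3 classes, randomly initialized
DetectionCheckpointer(model).load("path/to/model_final.pth")
# Backbone, FPN, and RPN weights are loaded from the checkpoint; the
# mismatched box/mask predictor layers are skipped and will be learned
# from the custom data.
```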

nasheedyasin avatar May 21 '21 11:05 nasheedyasin

It would be really great to have a short tutorial on fine-tuning on a custom dataset with custom labels, starting from a pretrained model.

natasasdj avatar May 21 '21 15:05 natasasdj

> It would be really great to have a short tutorial on fine-tuning on a custom dataset with custom labels, starting from a pretrained model.

If I'm successful in fine-tuning on a custom dataset, I'll definitely work on a tutorial covering it.

nasheedyasin avatar May 21 '21 15:05 nasheedyasin

The plan for updating the repo and creating a dedicated fine-tuning tutorial has been unintentionally delayed - I will get back to this project in the next week or two and release the updates. Please stay tuned :)

lolipopshock avatar May 22 '21 06:05 lolipopshock

> The plan for updating the repo and creating a dedicated fine-tuning tutorial has been unintentionally delayed - I will get back to this project in the next week or two and release the updates. Please stay tuned :)

Hi! Any updates on the fine-tuning tutorial? I'm looking forward to it!

VladyslavHerasymiuk avatar Jun 21 '21 17:06 VladyslavHerasymiuk

> Hi! Any updates on the fine-tuning tutorial? I'm looking forward to it!

We recently updated a bunch of things to make the repo more flexible. I'll work on creating the tutorial as and when I'm free, usually over the weekend.

nasheedyasin avatar Jun 21 '21 18:06 nasheedyasin

Hi all, here's a draft of the tutorial on fine-tuning models using this repo.

I will close this issue when it is published and post a link to the published version of the tutorial.

nasheedyasin avatar Jun 29 '21 15:06 nasheedyasin

> Hi all, here's a draft of the tutorial on fine-tuning models using this repo.
>
> I will close this issue when it is published and post a link to the published version of the tutorial.

It seems the post is still not publicly available?

lolipopshock avatar Jun 29 '21 15:06 lolipopshock

> It seems the post is still not publicly available?

For now, to access the draft you'll need to be logged in to your Medium account; once it's published (hopefully within three days at most), you'll be able to access it publicly without logging in.

nasheedyasin avatar Jun 29 '21 15:06 nasheedyasin

> For now, to access the draft you'll need to be logged in to your Medium account; once it's published (hopefully within three days at most), you'll be able to access it publicly without logging in.

Thanks - I just took a quick look and it looks nice! Would you mind if I also include it on the layout-parser website as a tutorial for model training in the future? We can talk about the details if you join the Slack channel - thanks!

lolipopshock avatar Jun 29 '21 16:06 lolipopshock

Would love that, I'll join the channel right away.

nasheedyasin avatar Jun 29 '21 20:06 nasheedyasin

The tutorial is now live on Towards Data Science.

nasheedyasin avatar Jul 01 '21 13:07 nasheedyasin

See #10

lolipopshock avatar Feb 14 '22 22:02 lolipopshock