
Can I finetune from coco-pretrained models?

Open CoinCheung opened this issue 7 years ago • 4 comments

Expected results

In the caffe2 implementation, there is a field TRAIN.WEIGHTS which allows us to use models pretrained on datasets such as COCO. In this repo, can I only use pretrained ResNet backbones, but not the whole pretrained model?

Actual results

I found nowhere that I could load a whole pretrained Faster/Mask R-CNN model.

CoinCheung avatar Aug 13 '18 12:08 CoinCheung

Check out issue #76.

flauted avatar Sep 21 '18 14:09 flauted

I should be more precise:

  • Go to the Detectron model zoo. There is a chart of "End-to-End Faster & Mask R-CNN Baselines." This link should take you there.
  • The left-hand column of backbone names corresponds fairly obviously to the config files. For example, on the End-to-End chart, backbone R-101-FPN Mask 2x corresponds to configs/baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.
  • On the right-hand side of the chart, click on the model download link. The filename defaults to something like model_final.pkl. Not very useful, so I recommend renaming it (e.g. to e2e_mask_rcnn_R-101-FPN_2x.pkl) and putting it in data/pretrained_model/.
  • (I'm trying to eval, but judging from the structure of this repo, fine-tuning is similar.) You'll now run your command with --cfg [CONFIG] --load_detectron [MATCHING DETECTRON PKL] and it should just work.
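The rename-and-run flow above can be sketched in a few shell commands. All paths and filenames here are illustrative, `touch` stands in for the actual model-zoo download, and the final invocation is an assumption modeled on this repo's tools/ scripts:

```shell
# Step 3 above: put the downloaded weights where the repo expects them.
# model_final.pkl is a placeholder for the file downloaded from the zoo.
touch model_final.pkl
mkdir -p data/pretrained_model
mv model_final.pkl data/pretrained_model/e2e_mask_rcnn_R-101-FPN_2x.pkl

# Step 4: point --cfg at the matching config and --load_detectron at the
# renamed weights (commented out, since it needs the full repo checkout):
# python tools/test_net.py \
#     --cfg configs/baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml \
#     --load_detectron data/pretrained_model/e2e_mask_rcnn_R-101-FPN_2x.pkl
```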

flauted avatar Sep 21 '18 14:09 flauted

The steps given are great if one hasn't pretrained a model before. However, I have fine-tuned a model twice using "--load_ckpt", and now I need to replace the softmax layer with a new layer of a different size. Setting the weights to "None" or "True", as described in other issues, isn't the problem; the problem is that I can't load a checkpoint with "--load_detectron" because the checkpoint's "blobs" are integers for some reason. Any ideas what's going on?
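One common way to swap in a softmax layer of a different size is to drop the class-dependent head blobs from the checkpoint so they get re-initialized at the new size. Below is a minimal sketch; the blob names (cls_score_*, bbox_pred_*) are assumptions modeled on Detectron checkpoints, and real files hold numpy arrays, stubbed here with plain lists:

```python
import pickle

# A fabricated checkpoint in the Detectron layout: a dict whose 'blobs'
# entry maps layer names to weight arrays (stubbed with lists here).
ckpt = {
    'blobs': {
        'conv1_w': [0.0],      # backbone weight (kept)
        'cls_score_w': [0.0],  # softmax weight, sized by class count (dropped)
        'cls_score_b': [0.0],
        'bbox_pred_w': [0.0],  # box regression is also class-dependent
        'bbox_pred_b': [0.0],
    }
}

# Drop the head blobs so the new softmax/regression layers start fresh.
head_prefixes = ('cls_score', 'bbox_pred')
ckpt['blobs'] = {
    k: v for k, v in ckpt['blobs'].items()
    if not k.startswith(head_prefixes)
}

# Save the trimmed checkpoint for loading with a new class count.
with open('trimmed_model.pkl', 'wb') as f:
    pickle.dump(ckpt, f)

print(sorted(ckpt['blobs']))  # ['conv1_w']
```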

B2Gdevs avatar Oct 23 '18 18:10 B2Gdevs

Check that you have a .pkl file instead of a .pth. By the way, how did you manage to fine-tune a model twice without running into the blob issue? (Fine-tune model A into model B, then take model B and fine-tune it into model C.) How did you convert the pretrained weights between models B and C, as you did from A to B?
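That check can be automated with a small heuristic. This is a sketch under the assumption that --load_detectron expects a plain pickle of a dict with a 'blobs' entry, while --load_ckpt .pth files are PyTorch checkpoints:

```python
import pickle

def looks_like_detectron_pkl(path):
    """Heuristic check: Detectron weights are a plain pickle of a dict
    containing a 'blobs' key. PyTorch .pth checkpoints will either fail
    to unpickle this way or lack that key."""
    try:
        with open(path, 'rb') as f:
            # latin1 lets Python 3 read pickles written by Python 2 / caffe2.
            data = pickle.load(f, encoding='latin1')
    except Exception:
        return False
    return isinstance(data, dict) and 'blobs' in data

# Demo with a fabricated file standing in for a model-zoo download:
with open('demo.pkl', 'wb') as f:
    pickle.dump({'blobs': {'conv1_w': [0.0]}}, f)
print(looks_like_detectron_pkl('demo.pkl'))  # True
```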

tebandesade avatar Nov 07 '18 07:11 tebandesade