
Some questions about training and testing

Open YiLiangNie opened this issue 7 years ago • 20 comments

When will you release the solver.prototxt file for training, and the synset.txt file for CUB-200 so we can test the model?

YiLiangNie avatar Sep 12 '17 08:09 YiLiangNie

@YiLiangNie Hi, did you train the model?

super-wcg avatar Oct 16 '17 11:10 super-wcg

@super-wcg I froze the APN nets, randomly initialized four classification layers (one for each of the 3 scales plus a fusion scale), then fine-tuned the model from the given one with a 1e-4 learning rate, but I only got 81.2% on CUB. If I froze all the APN and conv layers and trained only the classification layers, I got 83.5%, not the 85% reported in the paper. What about you?

chenbinghui1 avatar Oct 24 '17 01:10 chenbinghui1

@chenbinghui1 How did you do this? I need your help to run the project.

  1. Can this project run on Linux?
  2. I have downloaded the "CUB_200_2011" dataset, but I don't know what to do next. It would be much better if you could write a detailed doc. My e-mail address is [email protected] I really need your help. Thank you very much!

chenfeima avatar Nov 02 '17 03:11 chenfeima

@chenfeima Here is what I did to train the network. If I read the paper correctly, training consists of three steps. I skipped the initialization with the VGG weights and the reinforcement-learning part, because reinforcement-learning algorithms are not part of Caffe, and created three different train_val.prototxt files:

  1. The first trains the scaling subnets, with all other layers frozen.
  2. The second trains the attention proposal networks with a ranking loss until convergence; in this stage all scaling layers are frozen. A sample implementation of this loss layer can be found here: https://github.com/wanji/caffe-sl/blob/master/src/caffe/layers/pairwise_ranking_loss_layer.cpp
  3. In the final stage all layers are frozen and only the final output layers of the network, which combine the outputs of the different scales, are trained.

I made a gist with the training prototxt files for these three steps. You may use it as a starting point: https://gist.github.com/jens25/6b0ea1143599fb99bd499a08dd5c072c

jens25 avatar Nov 02 '17 09:11 jens25
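For readers unfamiliar with how layers are frozen between these stages: in Caffe this is usually done inside the train_val.prototxt itself, by zeroing the learning-rate multipliers of the layers that should not be updated. A minimal sketch of one frozen layer (the layer name `conv1_1` here is just an illustration, not taken from the actual RA-CNN prototxt files):

```
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  # lr_mult: 0 on both params stops weight and bias updates for this layer.
  param { lr_mult: 0 decay_mult: 0 }  # weights
  param { lr_mult: 0 decay_mult: 0 }  # bias
  convolution_param { num_output: 64 kernel_size: 3 pad: 1 }
}
```

Trainable layers keep the usual `lr_mult: 1` (weights) and `lr_mult: 2` (bias), so the three train_val.prototxt files can share the same architecture and differ only in which layers carry zeroed multipliers.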

@jens25 I have some questions. (1) What are your final results? Are they close to 85%? (2) As shown in your prototxt, the final classifier for each scale is a 100-class classifier, while CUB has 200 classes. (3) I think that directly fine-tuning the given model with all layers frozen except the classifier layers, i.e. your stage 3, should give a result close to 85%, but in fact it only reaches 83%.

chenbinghui1 avatar Nov 02 '17 11:11 chenbinghui1

@chenbinghui1 I don't have any results yet. I just created the prototxt files in order to train the network on a custom dataset; I haven't evaluated it on the bird dataset. Maybe directly fine-tuning the model will give you better results than this approach.

jens25 avatar Nov 02 '17 12:11 jens25

@jens25 I downloaded the "CUB_200_2011" dataset, but I cannot convert it to LMDB. Can you give me a script for the conversion?

chenfeima avatar Nov 02 '17 12:11 chenfeima
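For anyone else stuck at this step: Caffe's bundled `convert_imageset` tool builds the LMDB, but it first needs a text file listing `relative/path label` pairs. A sketch of generating those lists from the CUB metadata files is below; the file names (`images.txt`, `image_class_labels.txt`, `train_test_split.txt`) follow the standard CUB_200_2011 release layout, so adjust if yours differs.

```python
# Build the "path label" list files that Caffe's convert_imageset expects,
# from the CUB_200_2011 metadata files.
import os

def build_lists(cub_root):
    """Return (train_lines, test_lines), each a list of 'relpath label' strings.
    Caffe labels are 0-based; CUB class ids are 1-based, hence the -1."""
    def read_pairs(name):
        with open(os.path.join(cub_root, name)) as f:
            return dict(line.split() for line in f if line.strip())

    images = read_pairs("images.txt")              # img_id -> relative path
    labels = read_pairs("image_class_labels.txt")  # img_id -> class id (1-based)
    split = read_pairs("train_test_split.txt")     # img_id -> 1 train / 0 test

    train, test = [], []
    for img_id, path in sorted(images.items(), key=lambda kv: int(kv[0])):
        line = "%s %d" % (path, int(labels[img_id]) - 1)
        (train if split[img_id] == "1" else test).append(line)
    return train, test

if __name__ == "__main__":
    train, test = build_lists("CUB_200_2011")
    with open("train.txt", "w") as f:
        f.write("\n".join(train) + "\n")
    with open("test.txt", "w") as f:
        f.write("\n".join(test) + "\n")
    # Then, with Caffe built:
    # convert_imageset --resize_height=256 --resize_width=256 \
    #   CUB_200_2011/images/ train.txt cub_train_lmdb
```

This is only a sketch of the list-building step; resize dimensions and any shuffling are up to you.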

@chenbinghui1 I ran the test net and also got the 83% result. Do you know why? Have you reached 85%?

chenfeima avatar Nov 03 '17 12:11 chenfeima

@chenfeima If you only test the given model, it actually reaches 85%. But if you fine-tune it (only the classifier layers), you get 83%, and I don't know why.

chenbinghui1 avatar Nov 05 '17 01:11 chenbinghui1

@jens25 Thank you very much!

chenfeima avatar Nov 06 '17 15:11 chenfeima

@jens25 I want to know how to initialize the net. If there were only one VGG-19, I know I could use "--weights=caffemodel", but there are three subnets. I don't know how to initialize all of them from the same caffemodel.

chenfeima avatar Nov 09 '17 13:11 chenfeima

@chenfeima Hello, have you solved the initialization problem? I initialized the network by setting weight sharing in the train***.prototxt, saved the model right after initialization, before training, and then used that caffemodel as my pre-trained model. Could anyone tell me whether this is correct?

Zyj061 avatar Dec 25 '17 11:12 Zyj061

@Zyj061 Using the Python interface: 1. read the caffemodels; 2. copy the params into the new caffemodel by layer name.

chenfeima avatar Dec 25 '17 13:12 chenfeima
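The copy-by-layer-name recipe above can be sketched without Caffe installed by modeling each net's parameters as a dict keyed by layer name, which is essentially what pycaffe's `net.params` is. In real pycaffe code you would iterate the source `net.params` and assign into the destination the same way, then save the destination net. The layer names and `scale2/` prefix below are made up for illustration, not taken from the actual RA-CNN prototxt files.

```python
# Sketch: copy params into a new model by layer name, with plain dicts
# standing in for pycaffe's net.params. All names here are hypothetical.

def copy_by_name(src_params, dst_params, prefix=""):
    """Copy every source blob list whose (prefixed) name exists in dst_params.
    Returns the list of destination layer names that were actually copied."""
    copied = []
    for name, blobs in src_params.items():
        target = prefix + name
        if target in dst_params:
            dst_params[target] = [list(b) for b in blobs]  # deep copy
            copied.append(target)
    return copied

if __name__ == "__main__":
    # One pretrained "VGG" initializing three scale subnets: scale 1 reuses
    # the plain layer names, scales 2 and 3 carry a per-scale prefix.
    vgg = {"conv1_1": [[0.1, 0.2]], "fc6": [[0.3]]}
    racnn = {"conv1_1": [[0.0, 0.0]],
             "scale2/conv1_1": [[0.0, 0.0]],
             "scale3/conv1_1": [[0.0, 0.0]],
             "fc6": [[0.0]]}
    for p in ("", "scale2/", "scale3/"):
        copy_by_name(vgg, racnn, prefix=p)
```

Only layers whose names match are touched, so randomly initialized classifier layers are left alone, matching what the fine-tuning discussion above requires.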

@chenbinghui1 How can I add the AttentionCrop layer and the rank loss to caffe.proto? I need some help with compiling the message parameters in caffe.proto.

cocowf avatar Dec 28 '17 11:12 cocowf

@jens25 Hi, jens25. I have followed your steps, but it didn't work; the model just never converges. I skipped the initialization with the VGG weights and used Adam with lr=1e-4. It ran on CUB_200_2011 for about 10 epochs with loss1, loss2 and loss3 hovering around 5.

What may cause this? Any suggestions would be appreciated.

lhCheung1991 avatar Mar 08 '18 13:03 lhCheung1991

@lhCheung1991 Are you also trying to reproduce the result? Maybe we can talk offline.

ouceduxzk avatar Mar 10 '18 10:03 ouceduxzk

@ouceduxzk I really appreciate your message. I noticed that you have put some effort into re-implementing this paper. I am looking forward to discussing the details with you.

lhCheung1991 avatar Mar 10 '18 14:03 lhCheung1991

Hi, @chenbinghui1. Could you please tell me how you prepared your test data when testing the pretrained RA-CNN model? I can only get 74% accuracy using the available pretrained model, and I don't know why.

jackshaw avatar Jul 16 '18 07:07 jackshaw

@jackshaw Hi, can you leave me a contact (maybe QQ)? My email address is [email protected] I am having trouble getting training started. Thank you very much!

ProblemTryer avatar Jul 19 '18 14:07 ProblemTryer

@lhCheung1991 I met the same problem as you: the loss hovers around 5.2. Do you know what causes this, and how it can be solved? Thank you very much!

yuqiu1233 avatar Dec 18 '18 03:12 yuqiu1233