Recurrent-Attention-CNN
Some questions about training and testing
When will you release the solver.prototxt file for training, and the synset.txt file for CUB-200 so we can test the model?
@YiLiangNie Hi, did you train the model?
@super-wcg I froze the APN nets, randomly initialized the four classification layers (three scales plus a fusion scale), and then fine-tuned the given model with a learning rate of 1e-4, but I only got 81.2% on CUB. If I freeze all the APN and conv layers and train only the classification layers, I get 83.5%, not the 85% reported in the paper. What about you?
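For anyone trying the same fine-tuning, here is a minimal solver.prototxt sketch consistent with that setup; everything besides base_lr is an assumption, not a value from this thread:

```
# solver.prototxt sketch for fine-tuning the classification layers
net: "train_val.prototxt"
base_lr: 0.0001          # the 1e-4 fine-tuning rate mentioned above
lr_policy: "step"
stepsize: 20000
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
max_iter: 50000
snapshot: 10000
snapshot_prefix: "racnn_finetune"
solver_mode: GPU
```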
@chenbinghui1 How did you do this? I need your help to run the project.
- Can this project run on Linux?
- I have downloaded the dataset "CUB_200_2011", but I don't know what to do next. It would be much better if you could write a detailed doc. My e-mail address is [email protected] I really need your help. Thank you very much!
@chenfeima Here is what I did to train the network. If I understand the paper correctly, training consists of three steps. I skipped the initialization with the VGG weights and the reinforcement-learning part, because reinforcement-learning algorithms are not part of Caffe. So I created three different train_val.prototxt files. I used the first one to train the scaling subnets with all other layers frozen. The second one trains the attention proposal networks with a ranking loss until convergence; a sample implementation of that layer can be found here: https://github.com/wanji/caffe-sl/blob/master/src/caffe/layers/pairwise_ranking_loss_layer.cpp In this second stage all scaling layers are frozen. In the final training stage all layers are frozen and only the final output layers, which combine the outputs of the different scales, are trained. I made a gist with the training prototxt files for these three steps; you may use it as a starting point: https://gist.github.com/jens25/6b0ea1143599fb99bd499a08dd5c072c
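For reference, freezing a layer in a Caffe train_val.prototxt is done by zeroing its learning-rate multipliers. A minimal sketch (the layer name and shape parameters are illustrative, not taken from the gist):

```
# Freeze a conv layer: zero lr_mult/decay_mult for weights and bias
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  param { lr_mult: 0 decay_mult: 0 }  # weights stay fixed
  param { lr_mult: 0 decay_mult: 0 }  # bias stays fixed
  convolution_param {
    num_output: 64
    kernel_size: 3
    pad: 1
  }
}
```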
@jens25 I have some questions. (1) What is your final result? Is it close to 85%? (2) As shown in your prototxt, the final classifier for each scale is a 100-class classifier, while CUB has 200 classes. (3) I think that directly fine-tuning the given model with all layers frozen except the classifier layers, i.e. your stage 3, should give a result close to 85%, but in fact it only reaches 83%.
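Regarding point (2): for CUB, each scale's final InnerProduct classifier needs num_output to match the 200 classes. A sketch with illustrative layer names:

```
# Each scale's classifier must match the number of classes (200 for CUB)
layer {
  name: "fc_class_s1"        # hypothetical name for the scale-1 classifier
  type: "InnerProduct"
  bottom: "fc7_s1"
  top: "fc_class_s1"
  inner_product_param { num_output: 200 }
}
```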
@chenbinghui1 I don't have any results. I just created the prototxt files in order to train the network on a custom dataset; I haven't evaluated it on the bird dataset yet. Maybe direct fine-tuning of the model will give you better results than this approach.
@jens25 I downloaded the dataset "CUB_200_2011", but I cannot convert it to LMDB. Can you give me a script for the conversion?
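Not a script from this repo, but a minimal sketch of one way to do it, assuming the standard CUB_200_2011 layout (images.txt, image_class_labels.txt, train_test_split.txt) and a Caffe build that includes the convert_imageset tool; the 448x448 resize is an assumption based on the paper's scale-1 input:

```python
# make_cub_lists.py -- build train/test list files for convert_imageset
# Assumes the standard CUB_200_2011 metadata files.
import os

root = "CUB_200_2011"

def read_pairs(name):
    # Each metadata file is "<image_id> <value>" per line.
    with open(os.path.join(root, name)) as f:
        return dict(line.split() for line in f)

paths = read_pairs("images.txt")               # image_id -> relative path
labels = read_pairs("image_class_labels.txt")  # image_id -> class (1-based)
split = read_pairs("train_test_split.txt")     # image_id -> 1 train / 0 test

with open("train.txt", "w") as tr, open("test.txt", "w") as te:
    for img_id, rel_path in paths.items():
        label = int(labels[img_id]) - 1        # Caffe expects 0-based labels
        out = tr if split[img_id] == "1" else te
        out.write("%s %d\n" % (rel_path, label))

# Then build the LMDBs with Caffe's convert_imageset tool, e.g.:
#   ./build/tools/convert_imageset --resize_height=448 --resize_width=448 \
#       CUB_200_2011/images/ train.txt cub_train_lmdb
#   ./build/tools/convert_imageset --resize_height=448 --resize_width=448 \
#       CUB_200_2011/images/ test.txt cub_test_lmdb
```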
@chenbinghui1 I ran the test net and also got 83%. Do you know why? Have you reached 85%?
@chenfeima If you only test the given model, it actually reaches 85%. But if you fine-tune it (only the classifier layers), you get 83%, and I don't know why.
@jens25 Thank you very much!
@jens25 I want to know how to initialize the net. With a single VGG19 I know "--weights=caffemodel", but there are three subnets, and I don't know how to initialize all of them from the same caffemodel.
@chenfeima Hello, have you solved the initialization problem? I initialized the network by setting weight sharing in the train***.prototxt and saved the model after the network had been initialized, before training. Then I used this caffemodel as my pre-trained model. Could anyone tell me whether this is correct?
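For context, weight sharing in a prototxt is done by giving the param blobs of two layers the same name. A minimal sketch of that trick (layer and blob names are illustrative, not necessarily the ones used in this repo): the scale-1 layer keeps the original VGG19 layer name so "--weights" fills it, and the scale-2 copy picks up the same blobs through matching param names.

```
layer {
  name: "conv1_1"              # matches the VGG19 layer name for loading
  type: "Convolution"
  bottom: "data_s1"
  top: "conv1_1"
  param { name: "conv1_1_w" }  # shared weight blob
  param { name: "conv1_1_b" }  # shared bias blob
  convolution_param { num_output: 64 kernel_size: 3 pad: 1 }
}
layer {
  name: "conv1_1_s2"
  type: "Convolution"
  bottom: "data_s2"
  top: "conv1_1_s2"
  param { name: "conv1_1_w" }  # same param names -> same underlying blobs
  param { name: "conv1_1_b" }
  convolution_param { num_output: 64 kernel_size: 3 pad: 1 }
}
```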
@Zyj061 Using the Python interface: 1. read the caffemodels; 2. copy the params into the new caffemodel by layer name.
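A minimal net-surgery sketch of that recipe; the file names and the "_s2"/"_s3" suffix convention for the scale subnets are assumptions, not from this repo:

```python
# copy_vgg_weights.py -- broadcast VGG19 weights to the three scale subnets
import caffe

# Source: plain VGG19; target: the three-scale RA-CNN definition.
vgg = caffe.Net("vgg19_deploy.prototxt", "vgg19.caffemodel", caffe.TEST)
net = caffe.Net("racnn_train_val.prototxt", caffe.TEST)

for name, params in vgg.params.items():
    # Assumed naming: scale 1 reuses the VGG names, scales 2/3 add a suffix.
    for target in (name, name + "_s2", name + "_s3"):
        if target in net.params:
            for i in range(len(params)):
                net.params[target][i].data[...] = params[i].data

net.save("racnn_init.caffemodel")
```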
@chenbinghui1 How can I add the AttentionCrop layer and RankLoss in caffe.proto? I need some help with defining and compiling the message parameters in caffe.proto.
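Not the authors' actual definitions, but the standard pattern for registering a new layer parameter in caffe.proto looks like this; the field IDs and fields are illustrative guesses, so pick IDs not already used in your LayerParameter:

```
// In message LayerParameter, add new optional fields with unused IDs:
//   optional AttentionCropParameter attention_crop_param = 150;
//   optional RankLossParameter rank_loss_param = 151;

// Then define the messages themselves (fields here are hypothetical):
message AttentionCropParameter {
  optional uint32 output_size = 1 [default = 224];  // crop side length
  optional float scale = 2 [default = 1.0];
}

message RankLossParameter {
  optional float margin = 1 [default = 0.05];  // ranking-loss margin
}
```

After editing caffe.proto, rebuild Caffe so protoc regenerates caffe.pb.h/caffe.pb.cc, and register the corresponding layer implementations.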
@jens25 Hi, jens25. I have followed your steps, but it didn't work; the model never converges. I skipped the initialization with the VGG weights and used Adam with lr=1e-4. It ran on CUB_200_2011 for about 10 epochs with loss1, loss2, and loss3 hovering around 5.
What may cause this situation? Any suggestion will be appreciated.
@lhCheung1991 Are you also trying to reproduce the result? Maybe we can talk offline.
@ouceduxzk I really appreciate your message. I noticed that you have put some effort into re-implementing this paper. I look forward to discussing the details with you.
Hi @chenbinghui1, could you please tell me how you prepared your test data when testing the pretrained RA-CNN model? I can only get 74% accuracy with the available pretrained model, and I don't know why.
@jackshaw Hi, can you leave me a contact (maybe QQ)? My email address is [email protected] I have some trouble getting the model training started. Thank you very much!
@lhCheung1991 I met the same problem as you; the loss floats around 5.2. Do you know what causes the problem and how to solve it? Thank you very much!