TextguidedATT

Issue with model checkpoint loading in eval_res_att_knn_test5000.lua

Open · dastan92 opened this issue on Nov 09, 2017 · 7 comments

Hi Jonghwan,

Thank you for your prompt responses. I have the test features and want to evaluate them, but I run into the following issue when eval_res_att_knn_test5000.lua loads the model checkpoint. Please see the errors below.

ln: failed to create symbolic link './misc': File exists
ln: failed to create symbolic link './data': File exists
ln: failed to create symbolic link './model': File exists
ln: failed to create symbolic link './layers': File exists
ln: failed to create symbolic link './coco-caption': File exists
Load img info file : data/coco/cocotalk_trainval_img_info.json
Load kNN caps info file : data/coco/10NN_cap_valtrainall_cider.json
Load caption label info file : data/coco/cocotalk_cap_label.h5
Load img feat file : data/resnet101_conv_feat_448/
Load cap feat file : data/skipthought/cocotalk_trainval_skipthought.h5
sIdx (5001) | eIdx (10000)
initializing weights from model/textGuideAtt/res_textGuideAtt.t7
model/textGuideAtt/res_textGuideAtt.t7
/home/ubuntu/src/torch/install/bin/luajit: cannot open <model/textGuideAtt/res_textGuideAtt.t7> in mode r at /home/ubuntu/src/torch/pkg/torch/lib/TH/THDiskFile.c:673
stack traceback:
    [C]: at 0x7f453fe5f460
    [C]: in function 'DiskFile'
    /home/ubuntu/src/torch/install/share/lua/5.1/torch/File.lua:405: in function 'load'
    eval_res_att_knn_test5000.lua:105: in main chunk
    [C]: in function 'dofile'
    .../src/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
    [C]: at 0x00405d50

dastan92 avatar Nov 09 '17 07:11 dastan92

Figured it out: the path in the script was textGuideAtt instead of textGuidedAtt.
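For anyone hitting the same error, a throwaway Python check makes it obvious which of the two candidate directories actually exists (paths follow this repo's layout; adjust to your checkout):

```python
import os

# Compare the directory hard-coded in the script against the one on disk.
for d in ("model/textGuideAtt", "model/textGuidedAtt"):
    contents = os.listdir(d) if os.path.isdir(d) else "missing"
    print("%s -> %s" % (d, contents))
```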

dastan92 avatar Nov 09 '17 07:11 dastan92

Sorry for the typo. I will revise it. Thanks!

JonghwanMun avatar Nov 09 '17 07:11 JonghwanMun

Thanks, I finally got it to run. Can you tell me where I can find the metric scores (BLEU, METEOR, etc.) or the ranked captions? The files in my '002_inference/output' folder, 'res_rank_1_cap_test5000.json' and 'res_ranked_caps_test5000.json', do not contain any data. In the prediction_result folder under resNet I can access the 10 un-ranked captions (on which beam search has not been applied).

In any case, thanks for this repo!

dastan92 avatar Nov 09 '17 09:11 dastan92

Evaluation scores for the metrics are calculated at lines 118-124 of "ranking_caps.py". To reach those lines, "eval_after_rerank" should be on and "isTest" should be off.

Run the following command:

stdbuf -oL python ranking_caps.py \
    -NN_info_path data/coco/all_consensus_cap_test5000_cider.json \
    -prediction_path resNet/prediction_result/res_predictions_10NN_test5000.json \
    -output_ranked_caps output/res_ranked_caps_test5000.json \
    -output_rank_1_cap output/res_rank_1_cap_test5000.json \
    -eval_after_rerank 2>&1 | tee log_run_inference_test5000.log

However, if 'res_rank_1_cap_test5000.json' and 'res_ranked_caps_test5000.json' come out empty, there must be an error somewhere. Can you check whether an error occurs?
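For intuition, here is a toy Python sketch of the consensus re-ranking idea. This is not the actual ranking_caps.py; the similarity function below is a simple stand-in for the CIDEr-based consensus score, and all names are illustrative:

```python
def similarity(a, b):
    # Placeholder for a real sentence metric such as CIDEr:
    # Jaccard overlap of the two captions' word sets.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / float(len(ta | tb) or 1)

def rerank(candidates, consensus_caps):
    # Score each candidate caption by its mean similarity to the consensus
    # captions gathered from the image's nearest-neighbor training images.
    scored = [(sum(similarity(c, r) for r in consensus_caps) / len(consensus_caps), c)
              for c in candidates]
    scored.sort(reverse=True)
    return [c for _, c in scored]  # best-scoring caption first

# Usage sketch: best = rerank(ten_candidate_captions, knn_consensus_captions)[0]
```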

JonghwanMun avatar Nov 09 '17 10:11 JonghwanMun

I didn't really get any errors while running - I'll check again.

OK, cool. In the end, though, I made my own script using the pycocotools library to evaluate the metrics. You've been so helpful - thank you for that - I really appreciate you taking the time! I think I got my baseline running.
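For anyone wanting to do the same, a minimal sketch of such a script using the standard coco-caption tooling (pycocotools plus pycocoevalcap). It assumes a results file in the usual COCO results format, [{"image_id": ..., "caption": ...}, ...]; the file names here are placeholders:

```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

coco = COCO("annotations/captions_val2014.json")                # ground-truth captions
coco_res = coco.loadRes("output/res_rank_1_cap_test5000.json")  # generated captions

coco_eval = COCOEvalCap(coco, coco_res)
coco_eval.params["image_id"] = coco_res.getImgIds()  # score only images with predictions
coco_eval.evaluate()

# Prints Bleu_1..4, METEOR, ROUGE_L, CIDEr scores.
for metric, score in coco_eval.eval.items():
    print("%s: %.3f" % (metric, score))
```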

dastan92 avatar Nov 09 '17 10:11 dastan92

@dastan92 Hey, have you replicated the results in the paper?

vanpersie32 avatar Jan 27 '18 09:01 vanpersie32

Yeah - I was able to generate the captions, and I wrote my own evaluation-metrics script. Note that I did not use lattice search to find the best caption out of the 10; I just took the first one. This was one day before a course deadline for a baseline, so I had to work with whatever I could get.

dastan92 avatar Jan 29 '18 23:01 dastan92