CLIP-Caption-Reward

Config file and performance reproduce

Open 232525 opened this issue 3 years ago • 11 comments

I re-trained the MLE phase (on 8 V100s) using your released config file configs/phase1/clipRN50_mle.yml, but the performance is lower than reported in the paper (CIDEr: 106.5 vs. 110.3). Does the config file correspond to the experiment reported in the paper?

The warmup step count is set to 20000 in the config file; is that too large? The learning rate keeps rising throughout the entire training run (it only warms up and never decays).

232525 avatar Aug 31 '22 10:08 232525

Hi, the config file is adapted from the original config file of the CLIP-RN50 transformer model (https://github.com/clip-vil/CLIP-ViL/blob/master/CLIP-ViL-Direct/caption/configs/phrase1/transformer.yml). I only edited it with a larger batch size and fp16 for faster training. Since I didn't pay attention to the warmup parameters, I didn't notice that the learning rate never finished warming up. I'm not entirely sure about the lower score at the moment. For your purposes, maybe you can run the training with fewer warmup steps.

j-min avatar Aug 31 '22 23:08 j-min
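
A quick sanity check makes the warmup problem concrete: under the released 8-GPU config, the total number of optimizer steps is smaller than the warmup length, so a linear (or Noam-style) warmup never finishes. This is a rough sketch; the COCO Karpathy train-split size below is an assumption, not a value from the thread.

```python
# Sketch: does a 20000-step warmup ever finish under the 8-GPU config?
NUM_TRAIN_IMAGES = 113_287   # COCO Karpathy train split (assumed size)
PER_GPU_BATCH = 25
NUM_GPUS = 8
EPOCHS = 25
WARMUP_STEPS = 20_000        # value from configs/phase1/clipRN50_mle.yml

effective_batch = PER_GPU_BATCH * NUM_GPUS            # 200
steps_per_epoch = NUM_TRAIN_IMAGES // effective_batch # 566
total_steps = steps_per_epoch * EPOCHS                # 14,150

# total_steps < WARMUP_STEPS, so the learning rate rises for the whole
# run and never decays, matching the behavior reported above.
print(total_steps, WARMUP_STEPS, total_steps < WARMUP_STEPS)
```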

Thanks for your reply. Would you please provide the wandb output.log file of your training process?

232525 avatar Sep 01 '22 02:09 232525

Back then I didn't use wandb, so I don't have log files for that run, sorry.

j-min avatar Sep 01 '22 02:09 j-min

Sorry for another question: the training settings reported in your paper say:

> We train our model with MLE objective for 15 epochs and further train with different rewards for 25 epochs (total 40 epochs), which takes within 1 day with 8 V100 GPUs.

But I notice that max_epoch is set to 25 in your first-phase config file.

232525 avatar Sep 01 '22 03:09 232525

I just remembered that I actually ran the original CLIP-ViL training script to train the MLE model. Could you please run with the same batch size (10) for 25 epochs following https://github.com/clip-vil/CLIP-ViL/blob/master/CLIP-ViL-Direct/caption/configs/phrase1/transformer.yml?

j-min avatar Sep 01 '22 03:09 j-min
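
For context on why the single-GPU recipe behaves differently: at batch size 10 the warmup occupies only a small fraction of training. A minimal sketch, again assuming COCO Karpathy's ~113,287 training images:

```python
# Sketch: same arithmetic for the original single-GPU CLIP-ViL recipe
# (batch size 10, 25 epochs). Dataset size is an assumption.
NUM_TRAIN_IMAGES = 113_287
BATCH = 10
EPOCHS = 25
WARMUP_STEPS = 20_000

steps_per_epoch = NUM_TRAIN_IMAGES // BATCH   # 11,328
total_steps = steps_per_epoch * EPOCHS        # 283,200

# Here warmup is only ~7% of training, so the schedule behaves as
# intended, which may explain why the original recipe reproduces the
# paper numbers while the 8-GPU config does not.
print(f"warmup fraction: {WARMUP_STEPS / total_steps:.1%}")
```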

> I just remembered that I actually ran the original CLIP-ViL training script to train the MLE model. Could you please run with the same batch size (10) for 25 epochs following https://github.com/clip-vil/CLIP-ViL/blob/master/CLIP-ViL-Direct/caption/configs/phrase1/transformer.yml?

With a single GPU?

232525 avatar Sep 01 '22 03:09 232525

Yes

j-min avatar Sep 01 '22 03:09 j-min

> Yes

OK, I will try soon. Thank you again.

232525 avatar Sep 01 '22 03:09 232525

For multi-GPU training, I guess you could get similar performance with fewer warmup steps, e.g. 1000.

j-min avatar Sep 01 '22 03:09 j-min

> For multi-GPU training, I guess you could get similar performance with fewer warmup steps, e.g. 1000.

Yes, I have tried warmup steps of 1250, and the first phase reaches CIDEr 109.2. But the second phase (CIDEr RL with a fixed lr of 2.5e-6) is worse (CIDEr 121.6 vs. 124.9 in the paper).

232525 avatar Sep 01 '22 03:09 232525
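
The 1250-step figure is consistent with a common linear-scaling heuristic (an assumption here, not something stated in the repo): keep the number of training examples seen during warmup roughly constant when the effective batch size changes.

```python
# Sketch: scale warmup steps inversely with effective batch size so the
# model sees the same number of examples during warmup. This heuristic
# is an assumption, not part of the CLIP-Caption-Reward codebase.
def scaled_warmup(base_warmup: int, base_batch: int, new_batch: int) -> int:
    """Keep warmup constant in examples rather than in optimizer steps."""
    return max(1, round(base_warmup * base_batch / new_batch))

# Original recipe: 20000 steps at batch 10 maps to 1000 steps at the
# 8-GPU effective batch of 200, close to the 1250 steps that recovered
# CIDEr 109.2 above.
print(scaled_warmup(20_000, 10, 200))  # -> 1000
```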

Here I attach the output.log for the CIDEr run. I used the same configuration (8 V100s, batch size 25 per GPU) as the current config file.

cider_output.log

j-min avatar Sep 01 '22 03:09 j-min