
Different results from paper

Open lbx73737373 opened this issue 1 year ago • 5 comments

Hi, thank you for your great work! I'm reproducing the MSRVTT captioning results using the fine-tuned weights provided in the repo (mPLUG2_MSRVTT_Caption.pth, downloaded from the link), but I cannot get the result reported in the paper; there is a huge gap. What could the problem be? Thanks!

My results: {'Bleu_1': 0.2391483871053033, 'Bleu_2': 0.1397145198812077, 'Bleu_3': 0.08582614908051771, 'Bleu_4': 0.0554141450685924, 'CIDEr': 0.6409439525382706}

More information:

  • using checkpoint mPLUG2_MSRVTT_Caption.pth downloaded from the link
  • using the language_evaluation package from https://github.com/bckim92/language-evaluation (a scoring sketch follows this list)
  • using the MSRVTT 1k-A test split (also called the JSFUSION split), which is the same split used for text-to-video retrieval
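
For concreteness, here is a minimal scoring sketch with that language_evaluation package. The result-file path, the pred_caption/gold_caption field names, and the assumption that run_evaluation accepts per-video reference lists are mine, not taken from the repo's evaluation script.

```python
# Minimal scoring sketch (not the repo's exact evaluation code).
# Assumes a results JSON like caption_result_zeroshot.json with
# 'pred_caption' and 'gold_caption' entries per video.
import json
import language_evaluation

with open("output/videocaption_msrvtt_4/result/caption_result_zeroshot.json") as f:
    records = json.load(f)

predicts, answers = [], []
for r in records:
    predicts.append(r["pred_caption"])
    gold = r["gold_caption"]
    # gold_caption may be a single reference string or a list of references;
    # normalize to a list so multi-reference scoring is used when available.
    answers.append(gold if isinstance(gold, list) else [gold])

evaluator = language_evaluation.CocoEvaluator()  # BLEU/METEOR/ROUGE_L/CIDEr (and SPICE if enabled)
print(len(predicts), evaluator.run_evaluation(predicts, answers))
```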

My eval logs:

| distributed init (rank 3): env://
| distributed init (rank 1): env://
| distributed init (rank 2): env://
| distributed init (rank 0): env://
Creating video caption datasets
Creating model
use_checkpoint:  True
_IncompatibleKeys(
    missing_keys=[visual.transformer.resblocks.{0..23}.{lmhra1,lmhra2}.{ln,down_proj,conv,up_proj}.{weight,bias}  (384 adapter parameters in total)],
    unexpected_keys=[]
)
train_step_per_epoch: 11250
Selected optimization level O1:  Insert automatic casts around Pytorch functions and Tensor methods.

Defaults for this optimization level are:
enabled                : True
opt_level              : O1
cast_model_type        : None
patch_torch_functions  : True
keep_batchnorm_fp32    : None
master_weights         : None
loss_scale             : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled                : True
opt_level              : O1
cast_model_type        : None
patch_torch_functions  : True
keep_batchnorm_fp32    : None
master_weights         : None
loss_scale             : dynamic
Warning:  multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback.  Original ImportError was: ModuleNotFoundError("No module named 'amp_C'")
load checkpoint from /home/lbx/MyHome/pretrained_model_weights/mPLUG-2/mPLUG2_MSRVTT_Caption.pth
<All keys matched successfully>
Warning:  apex was installed without --cpp_ext.  Falling back to Python flatten and unflatten.
Start training
[{'video_id': 'video9770', 'pred_caption': 'a boy is fixing a computer', 'gold_caption': 'a person is connecting something to system'}, {'video_id': 'video7026', 'pred_caption': 'a man is talking about a car', 'gold_caption': 'a man is giving a review on a vehicle'}, {'video_id': 'video9778', 'pred_caption': 'a boy is performing on the voice', 'gold_caption': 'a little boy singing in front of judges and crowd'}, {'video_id': 'video9772', 'pred_caption': 'a cartoon character is flying', 'gold_caption': 'some cartoon characters are moving around an area'}]
Generate Caption test result:  [ 0/63]  eta: 0:08:24    time: 8.0067  data: 5.3014  max mem: 16941
Generate Caption test result:  [50/63]  eta: 0:00:17    time: 1.1697  data: 0.0001  max mem: 17413
Generate Caption test result:  [62/63]  eta: 0:00:01    time: 1.1289  data: 0.0001  max mem: 17413
Generate Caption test result: Total time: 0:01:21 (1.2926 s / it)
result file saved to output/videocaption_msrvtt_4/result/caption_result_zeroshot.json
1000 {'Bleu_1': 0.2391483871053033, 'Bleu_2': 0.1397145198812077, 'Bleu_3': 0.08582614908051771, 'Bleu_4': 0.0554141450685924, 'CIDEr': 0.6409439525382706}
Training time 0:01:23

lbx73737373 · Jun 22, 2024

Are you doing inference, or fine-tuning from a pre-trained model? I used the pre-trained model mPLUG_Pretrain.pth to train on the MSVD and MSRVTT datasets on a single A800, and there is a big gap between my results and the authors'.

yangxingrui · Dec 10, 2024

When I run inference, I get the same problem and cannot reproduce the results in the paper. Have you solved this problem?

My results: {'Bleu_1': 0.8571608322523123, 'Bleu_2': 0.7494237174593057, 'Bleu_3': 0.6316935911470557, 'Bleu_4': 0.5164274153919042, 'METEOR': 0.32303834602466674, 'CIDEr': 0.6902345960625933, 'ROUGE_L': 0.6622619741489134, 'SPICE': 0.08053052885011674}

Implementation details:

  • using checkpoint mPLUG2_MSRVTT_Caption.pth downloaded from the link
  • using the language_evaluation package from https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/mPLUG/language_evaluation.tar
  • using ViT-L-14 downloaded from the link
  • using bert-large-uncased downloaded from https://huggingface.co/google-bert/bert-large-uncased/tree/main
  • using the JSFUSION split to make train/test.jsonl (a data-prep sketch follows this list)
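
For reference, a rough sketch of how a multi-reference test.jsonl for the 1k-A (JSFUSION) split could be put together. The split CSV name MSRVTT_JSFUSION_test.csv, the raw annotation file MSRVTT_data.json, and the output field names "video"/"caption" are assumptions about this setup, not the repo's actual data-preparation code.

```python
# Hypothetical data-prep sketch: restrict MSR-VTT captions to the 1k-A
# (JSFUSION) test videos and keep every reference caption per video.
import csv
import json
from collections import defaultdict

# 1k-A split: 1,000 test video IDs (file name and column are assumptions).
with open("MSRVTT_JSFUSION_test.csv") as f:
    test_ids = {row["video_id"] for row in csv.DictReader(f)}

# Raw MSR-VTT annotations: 'sentences' entries carry video_id and caption.
with open("MSRVTT_data.json") as f:
    sentences = json.load(f)["sentences"]

refs = defaultdict(list)
for s in sentences:
    if s["video_id"] in test_ids:
        refs[s["video_id"]].append(s["caption"])

# One jsonl line per test video, with all references attached.
with open("test.jsonl", "w") as out:
    for vid, caps in sorted(refs.items()):
        out.write(json.dumps({"video": vid + ".mp4", "caption": caps}) + "\n")
```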

I commented out print(msg) in initialize_clip_video when loading the checkpoint; it also reports _IncompatibleKeys.
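
As an aside, that _IncompatibleKeys message is simply what PyTorch's load_state_dict(strict=False) returns when the checkpoint lacks parameters for modules added after pre-training (such as the lmhra adapters). A self-contained toy illustration follows; the class names are made up for illustration and are not the actual initialize_clip_video code.

```python
# Toy illustration of the _IncompatibleKeys report (illustrative names only).
import torch.nn as nn

class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(8, 8)

class BackboneWithAdapter(Backbone):
    def __init__(self):
        super().__init__()
        self.adapter = nn.Linear(8, 8)  # stands in for the lmhra adapters

checkpoint = Backbone().state_dict()   # pre-trained weights: no adapter params
model = BackboneWithAdapter()
msg = model.load_state_dict(checkpoint, strict=False)
print(msg)
# _IncompatibleKeys(missing_keys=['adapter.weight', 'adapter.bias'], unexpected_keys=[])
# The adapter stays at its fresh initialization until weights for it are loaded.
```

In the logs above, the subsequent load of mPLUG2_MSRVTT_Caption.pth reports "<All keys matched successfully>", so the adapters do receive weights from the fine-tuned checkpoint.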

My logs:

| distributed init (rank 0): env://
Creating video caption datasets
Creating model
use_checkpoint:  True
train_step_per_epoch: 2250
Selected optimization level O1:  Insert automatic casts around Pytorch functions and Tensor methods.

Defaults for this optimization level are:
enabled                : True
opt_level              : O1
cast_model_type        : None
patch_torch_functions  : True
keep_batchnorm_fp32    : None
master_weights         : None
loss_scale             : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled                : True
opt_level              : O1
cast_model_type        : None
patch_torch_functions  : True
keep_batchnorm_fp32    : None
master_weights         : None
loss_scale             : dynamic
Warning:  multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback.  Original ImportError was: ModuleNotFoundError("No module named 'amp_C'")
load checkpoint from mPLUG2_MSRVTT_Caption.pth
<All keys matched successfully>
Warning:  apex was installed without --cpp_ext.  Falling back to Python flatten and unflatten.
Start training
2024-12-15 21:32:36.621719: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-15 21:32:36.701944: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-12-15 21:32:37.006462: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/sn/anaconda3/envs/lhj_swinbert/lib/python3.10/site-packages/cv2/../../lib64:/usr/local/cuda-11.8/lib64:/usr/local/cuda-11.8/lib64
2024-12-15 21:32:37.006497: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/sn/anaconda3/envs/lhj_swinbert/lib/python3.10/site-packages/cv2/../../lib64:/usr/local/cuda-11.8/lib64:/usr/local/cuda-11.8/lib64
2024-12-15 21:32:37.006500: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
[{'video_id': 'video7020', 'pred_caption': 'a woman is cutting a flower', 'gold_caption': ['a person is preparing some art', 'a person making stuff out of clay', 'a woman  creating a fondant baby and flower', 'a woman constructing a sculpture with playdoh', 'a woman creates a baby out of craft supplies', 'a woman creates some crafts', 'a woman is crafting with clay', 'a woman is making crafts', 'a woman is making crafts', 'a woman makes a craft project', 'a woman makes crafts', 'a woman makes realistic looking leaves and flowers for a cake', 'a woman places a leaf on some dough and cuts a flower out of a different piece of dough', 'a woman wraps a baby doll in some fake leaves', 'showing a craft work', 'someone is doing a craft', 'someone is placing a leaf on a fake fetus of a baby and then proceeds to cutting up a flower', 'someone is showing some art', 'woman is showing steps to making fondant characters', 'wrapping a fake boy with a leaf']}, {'video_id': 'video7021', 'pred_caption': 'people are playing baseball', 'gold_caption': ['a baseball batter hits the ball to the fence and a outfielder goes after it', 'a baseball game is played', 'a baseball game taking place', 'a baseball player hits a ball to the back of the field', 'a batter hitting a ball and players on a field', 'a man is calling a baseball game', 'a man is hitting the ball in a baseball game', 'a report about a baseball game', 'a sports announcer reports on a baseball game', 'baseball casting as a player runs after ball', 'baseball player hits ball', 'bunch of players playing baseball game', 'in the baseball match a player hits the ball and then celebrates with his team', 'men are playing baseball and one of the men hits a baseball a long distance', 'people playing a baseball game', 'people playing baseball and someone commentating', 'players hitting baseballs with bat', 'this is a highlight from a baseball game', 'this is vidio from a baseball game', 'this is a highlight from a baseball game']}, {'video_id': 'video7024', 'pred_caption': 'a person is playing with a toy cat', 'gold_caption': ['a toy cat is bathing in soapy water in a toy bathtub', 'small plastic cat being washed and rubbed in a plastic tub', 'there is a someone playing with a deer toy', 'a person pretends to bathe a small plastic cat using a bathroom play set of equally small size', 'the head of doll is being washed by someone in soapy water', 'there is a hand with pink nails cleaning a kitty toy in a miniature bathtub', 'a person bathing a toy cat in a little bath tub', 'a toy cat in a toy bubble bath is being washed and groomed', 'a toy kitten getting a bath in a white bath tub', 'a person cleans a plastic toy cat in a very small bathtub', 'a person puts toy cat in a tub and washes it with a brush', 'someone cleaning a toy cat in a water tub', 'a  toy cat is in small toy bathtub with water and bubble and girl washes with scrubber', 'little pet shop cat getting a bath and washed with little brush', 'girl giving her adorable toy kitty a nice bubble bath', 'the small toys are being showered in the small bathroom', 'woman is cleaning the head of the orange cat', 'woman is putting the orange doll to have shower', 'woman is putting the small cat into the water and cleaning it', 'pony doll with big green eyes taking a bath in tiny bathtub']}, {'video_id': 'video7025', 'pred_caption': 'a man is running', 'gold_caption': ['a boy running is running without dress', 'a child is running naked across the grass', 'a naked boy running on a beach then a picture of a man', 
'a naked child runs through a field', 'a naked child runs through a field', 'a short clip showing people on a beach', 'a video of kids and adults on a beach', 'a woman is running', 'a woman running on grass', 'a young child is running naked on the beach', 'people are relaxing next to a lake', 'people are running around', 'people are running in a recreation area', 'playing and running on the beach', 'the girls swiming in the ocean', 'the video shows old footage of a man and also of a beach', 'this is a stillshot of a soldier', 'video of people on a nude beach', 'a woman is running', 'a young child is running naked on the beach']}]
Generate Caption test result:  [  0/250]  eta: 0:18:14    time: 4.3791  data: 1.7108  max mem: 16941
Generate Caption test result:  [ 50/250]  eta: 0:03:32    time: 0.9816  data: 0.0001  max mem: 17413
Generate Caption test result:  [100/250]  eta: 0:02:33    time: 0.9521  data: 0.0001  max mem: 17413
Generate Caption test result:  [150/250]  eta: 0:01:41    time: 0.9901  data: 0.0001  max mem: 17413
Generate Caption test result:  [200/250]  eta: 0:00:50    time: 0.9703  data: 0.0001  max mem: 17413
Generate Caption test result:  [249/250]  eta: 0:00:00    time: 0.9478  data: 0.0000  max mem: 17413
Generate Caption test result: Total time: 0:04:09 (0.9986 s / it)
result file saved to output/videoqa_msrvtt_1/result/caption_result_zeroshot.json
Parsing reference captions
Initiating Stanford parsing pipeline
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[main] INFO edu.stanford.nlp.pipeline.TokenizerAnnotator - TokenizerAnnotator: No tokenizer type provided. Defaulting to PTBTokenizer.
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse
[main] INFO edu.stanford.nlp.parser.common.ParserGrammar - Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... 
done [0.2 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [0.5 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [0.2 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.3 sec].
Threads( StanfordCoreNLP ) [18.398 seconds]
Threads( StanfordCoreNLP ) [16.854 seconds]
Parsing test captions
Threads( StanfordCoreNLP ) [0.993 seconds]
SPICE evaluation took: 40.51 s
1000 {'Bleu_1': 0.8571608322523123, 'Bleu_2': 0.7494237174593057, 'Bleu_3': 0.6316935911470557, 'Bleu_4': 0.5164274153919042, 'METEOR': 0.32303834602466674, 'CIDEr': 0.6902345960625933, 'ROUGE_L': 0.6622619741489134, 'SPICE': 0.08053052885011674}
Training time 0:04:53


DayNightLearner · Dec 16, 2024

hello,

I cannot find the link referred to in "using checkpoint mPLUG2_MSRVTT_Caption.pth downloaded from the link". Where is that link?

any ideas?

regards

vassokouts · Apr 1, 2025


It's in the README; the direct link is http://tjfd2.oss-cn-zhangjiakou.aliyuncs.com/mplug2/mPLUG2_MSRVTT_Caption.pth

DayNightLearner · Jun 3, 2025


I also have the same problem. My results:

1000 {'Bleu_1': 0.8543654737978459, 'Bleu_2': 0.7378183593769512, 'Bleu_3': 0.6142508049621727, 'Bleu_4': 0.49356243584209786, 'METEOR': 0.32518479654815075, 'ROUGE_L': 0.6626044480202753, 'CIDEr': 0.6884474233949733, 'SPICE': 0.08295830064784619}

nbbb · Jul 28, 2025