
Evaluate problem

Open Twilighter9527 opened this issue 2 years ago • 4 comments

This is very nice work. Can you provide the code for evaluating it? I don't know how to evaluate it. Thank you!

Twilighter9527 avatar Mar 23 '22 07:03 Twilighter9527

Hi @wytszhuzhu, we used the Oscar evaluation code to evaluate on COCO.

I hope this is helpful.

rmokady avatar Mar 24 '22 22:03 rmokady

oh! thank you very much!!!


Twilighter9527 avatar Mar 25 '22 01:03 Twilighter9527

Hello! I have some questions about the settings. I notice that the default training script does not use `normalize_prefix`:

```
python train.py --only_prefix --data ./data/coco/oscar_split_ViT-B_32_train.pkl --out_dir ./coco_train/ --mapping_type transformer --num_layers 8 --prefix_length 40 --prefix_length_clip 40
```

while the inference code, `transformer_inference.ipynb`, does normalize the prefix:

```
prefix = prefix / prefix.norm(2, -1).item()
```

So, for the COCO captions dataset, which is better: normalizing the prefix or not? Thanks!
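For context, the inference line above is just an L2 normalization of the CLIP embedding: it rescales the prefix vector to unit norm before it is fed to the mapping network. A minimal pure-Python sketch of what that operation does (the function name and values here are illustrative, not from the repo):

```python
import math

def l2_normalize(vec):
    """Rescale vec to unit L2 norm, mirroring
    `prefix = prefix / prefix.norm(2, -1).item()` in the notebook."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

v = [3.0, 4.0]          # L2 norm is 5.0
u = l2_normalize(v)     # -> [0.6, 0.8], which has unit norm
```

The practical concern is train/inference consistency: if the mapping network was trained on unnormalized prefixes, normalizing only at inference changes the input scale the network saw during training.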

JaireYu avatar Apr 12 '22 15:04 JaireYu

@wytszhuzhu -- I've added basic eval support in my branch: https://github.com/thandal/CLIP_prefix_caption

```
python3 train.py --only_prefix --data data/coco/oscar_split_ViT-B_32_val.pkl --out_dir ./coco_evaluate_local_2/ --mapping_type transformer --num_layers 8 --prefix_length 40 --prefix_length_clip 40 --model_path=models/transformer/coco_prefix-009.pt --evaluate
```

It's definitely a little clunky, but I also have a few scripts to massage the results into the type of files used by coco-eval.

Unfortunately, I haven't been able to reproduce the results in the paper yet!

```
val_karpathy:  {'bleu': [0.6574263790124814, 0.46054883354763715, 0.3079421708904345, 0.20412719661970102],
                'cider': 0.5766721258280709, 'meteor': 0.23068635129255388, 'rouge': 0.46257322083431907,
                'spice': 0.15591030077420115, 'wmd': 0.5299810088210446}
test_karpathy: {'bleu': [0.6548157984748666, 0.4597153137744317, 0.3103062426657983, 0.2078126593622194],
                'cider': 0.5843739503309977, 'meteor': 0.23188037724901417, 'rouge': 0.4614305714563363,
                'spice': 0.15830000031265373, 'wmd': 0.5291680687998455}
```
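For anyone writing their own "massage" step: the COCO caption results format that coco-eval consumes is simply a flat JSON list of `{"image_id", "caption"}` objects. A minimal sketch (the filename, image IDs, and captions are made-up placeholders):

```python
import json

# Hypothetical generated captions, keyed by COCO image id.
predictions = {
    397133: "a man riding a wave on a surfboard",
    37777:  "a plate of food on a table",
}

# coco-eval expects a JSON array of {"image_id": int, "caption": str}.
results = [{"image_id": img_id, "caption": cap}
           for img_id, cap in predictions.items()]

with open("captions_val_results.json", "w") as f:
    json.dump(results, f)
```

The resulting file can then be loaded with `COCO.loadRes()` and scored with the standard pycocoevalcap metrics.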

thandal avatar Sep 13 '22 21:09 thandal