CLIP_prefix_caption

How to evaluate the trained model? Is there a test.py ?

Open · liluying1996 opened this issue 1 year ago · 6 comments

liluying1996 · Jul 05 '23 11:07

I have the same question

rongtongxueya · Jul 06 '23 10:07

May I ask how you solved this problem?

AlanLowe007 · Jul 13 '23 17:07

You could consider https://github.com/salaniz/pycocoevalcap for evaluation.
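
For context, a minimal usage sketch following that repository's README; the annotation and result file paths are placeholders:

```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

# Ground-truth annotations (all five reference captions per image)
# and the model's generated captions in COCO result format.
coco = COCO('annotations/captions_val2014.json')        # placeholder path
coco_result = coco.loadRes('generated_captions.json')   # placeholder path

# Score the results: BLEU, METEOR, ROUGE-L, CIDEr, SPICE.
coco_eval = COCOEvalCap(coco, coco_result)
coco_eval.params['image_id'] = coco_result.getImgIds()
coco_eval.evaluate()

for metric, score in coco_eval.eval.items():
    print(f'{metric}: {score:.3f}')
```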

baiyuting · Jul 16 '23 03:07

> You could consider https://github.com/salaniz/pycocoevalcap for evaluation.

Thanks, but I have a question: does the evaluation produce only one label instead of five?

rongtongxueya · Jul 17 '23 08:07

Change the original data from image1: (caption1, caption2, caption3, caption4, caption5) to flattened pairs: image1: caption1, image1: caption2, image1: caption3, image1: caption4, image1: caption5 (see the sketch below).
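
A minimal sketch of that flattening step; the variable names and data are illustrative, not from the repo:

```python
# Hypothetical example: flatten one-image-many-captions annotations
# into individual (image, caption) pairs for training.
data = {"image1": ["caption1", "caption2", "caption3", "caption4", "caption5"]}
pairs = [(image, caption) for image, captions in data.items() for caption in captions]
# -> [("image1", "caption1"), ("image1", "caption2"), ..., ("image1", "caption5")]
```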

baiyuting · Jul 19 '23 13:07

Thank you for your reply. That format looks like the one used when extracting features with CLIP. My question is whether the same applies to inference and validation: are the generated captions saved in this format, and is that what BLEU and the other metrics are computed on?

rongtongxueya · Jul 20 '23 02:07
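
For reference, pycocoevalcap keeps the two formats separate: the five reference captions per image stay in the annotation JSON, while the model's generated captions go into a result file with exactly one entry per image_id; BLEU and the other metrics then compare each generated caption against all five references. A sketch of that result format (the ids and captions are made up):

```python
import json

# Hypothetical sketch: generated captions saved with one entry per
# image_id; the reference captions remain in the annotation file and
# are matched by image_id during evaluation.
results = [
    {"image_id": 42, "caption": "a dog running across a sandy beach"},
    {"image_id": 73, "caption": "two people riding bikes down a street"},
]
with open("generated_captions.json", "w") as f:
    json.dump(results, f)
```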