CLIP_prefix_caption
How do you evaluate the trained model? Is there a test.py?
I have the same question
May I ask how you solved this problem?
you could consider https://github.com/salaniz/pycocoevalcap for evaluation
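To make the suggestion concrete: pycocoevalcap expects two COCO-style JSON files, one with the ground-truth captions (all five references per image) and one with a single generated caption per image. A minimal sketch of the expected layout, with made-up filenames and captions:

```python
import json

# Ground truth: all five reference captions for an image go in "annotations",
# each sharing the same "image_id" (illustrative data).
annotations = {
    "images": [{"id": 1}],
    "annotations": [
        {"image_id": 1, "id": i, "caption": cap}
        for i, cap in enumerate([
            "a dog runs on the beach",
            "a brown dog running along the shore",
            "a dog plays near the ocean",
            "a dog sprinting across wet sand",
            "a puppy runs by the sea",
        ])
    ],
}

# Candidates: exactly one generated caption per image.
results = [{"image_id": 1, "caption": "a dog running on the beach"}]

with open("captions_gt.json", "w") as f:
    json.dump(annotations, f)
with open("captions_pred.json", "w") as f:
    json.dump(results, f)

# With pycocoevalcap installed, evaluation then follows its README:
#   coco = COCO("captions_gt.json"); res = coco.loadRes("captions_pred.json")
#   coco_eval = COCOEvalCap(coco, res); coco_eval.evaluate()
```

The scores (BLEU, METEOR, CIDEr, etc.) are computed against all references at once, so the five ground-truth captions stay grouped per `image_id` in the annotation file.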
Thanks, but I have a question: does the evaluation use only one reference caption per image instead of five?
Change the original data from `image1: (caption1, caption2, caption3, caption4, caption5)` to separate pairs: `image1: caption1`, `image1: caption2`, `image1: caption3`, `image1: caption4`, `image1: caption5`.
Thank you for your reply. That format looks like the one used when extracting features with CLIP. My question is whether the same applies to inference and validation: are the generated captions saved in this format, and is that what BLEU and the other metrics are computed on?