CLIP_prefix_caption
inference
When I run generate_beam to produce a caption, there are many spaces in the caption. Do you know why? Thank you.
Have you checked your training data? Your training captions may contain extra spaces, which would lead to this.
There are no extra spaces in the training data. And at inference time only the CLIP embedding tensor is needed, not the caption, isn't it?
I find that not all of your training captions end with '.'. Since the end token for beam search is '.', the model may not know when to end inference and keeps predicting spaces until it reaches the maximum inference length. Yes, there is no need to input the caption at inference time; what I mean is that the training data influences the model's behavior at inference. Maybe you can process the training captions so they all end with '.' and re-train your model to have a try?
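A minimal sketch of the suggested preprocessing step: normalize every training caption so it ends with '.' before re-training. The COCO-style `annotations`/`caption` JSON layout and the file paths here are assumptions for illustration, not necessarily the repo's exact data format.

```python
import json

def normalize_caption(caption: str) -> str:
    """Strip trailing whitespace and make sure the caption ends with '.'."""
    caption = caption.strip()
    if not caption.endswith('.'):
        caption += '.'
    return caption

def normalize_annotations(in_path: str, out_path: str) -> None:
    """Rewrite a COCO-style annotation file with normalized captions (assumed layout)."""
    with open(in_path) as f:
        data = json.load(f)
    for ann in data['annotations']:
        ann['caption'] = normalize_caption(ann['caption'])
    with open(out_path, 'w') as f:
        json.dump(data, f)
```

After re-generating the training pickle from the normalized annotations, the model should learn to emit the '.' stop token reliably.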
thank you. I will try it.
I ran into the same situation. May I ask whether this problem was solved by processing the training captions to all end with '.'?
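To see why a missing end token pads the output all the way to the maximum length, here is a toy decoding loop. This is not the repo's actual beam search, just an illustration of the stop-token logic under the assumption that decoding halts on '.' or at `max_len`:

```python
def decode(next_token, max_len=20, stop_token='.'):
    """Toy greedy loop: emit tokens until the stop token appears or max_len is hit."""
    out = []
    for step in range(max_len):
        tok = next_token(step)
        out.append(tok)
        if tok == stop_token:
            break
    return out

# A model trained on captions ending in '.' emits the stop token and halts early.
with_stop = decode(lambda i: '.' if i == 3 else 'word')
# A model that never learned to emit '.' keeps producing spaces until max_len.
without_stop = decode(lambda i: ' ')
```

With the stop token the loop terminates after 4 tokens; without it, the output is 20 space tokens, which matches the symptom described above.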
I really want to know how you generated this caption. I am new to this area and do not know how to reproduce the result. I followed the README, but it seems I cannot evaluate the model or predict captions.
https://github.com/thandal/CLIP_prefix_caption