CLIP
What is the best practice for training images with multiple captions/keywords?
Hi, I am trying to fine-tune a CLIP model on my own dataset. My data has multiple texts per image. My understanding is that I need to construct the ground-truth labels so that every one of an image's texts is treated as a positive pair with that image; otherwise, some of them may be treated as negative pairs during training. Is this correct? Is there any sample code or post related to this concern?
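To make the concern concrete, here is a rough sketch of the multi-positive loss I have in mind. It is not from any official CLIP code; the fixed `logit_scale` and the uniform soft targets over each image's positives are my own assumptions (CLIP actually learns its temperature):

```python
import torch
import torch.nn.functional as F

def multi_positive_clip_loss(image_emb, text_emb, text_to_image):
    """CLIP-style contrastive loss where an image may have several positive texts.

    image_emb:     (N_img, D) L2-normalized image embeddings
    text_emb:      (N_txt, D) L2-normalized text embeddings
    text_to_image: (N_txt,) long tensor, index of the image each text belongs to
    """
    logit_scale = 100.0  # assumed fixed temperature; CLIP learns this parameter
    logits = logit_scale * image_emb @ text_emb.t()  # (N_img, N_txt) similarities

    # Positive mask: pos[i, j] = 1 if text j describes image i
    pos = torch.zeros_like(logits)
    pos[text_to_image, torch.arange(text_emb.size(0))] = 1.0

    # Image -> text direction: spread the soft target uniformly over each
    # image's positive texts instead of a single one-hot label.
    targets_i2t = pos / pos.sum(dim=1, keepdim=True)
    loss_i2t = F.cross_entropy(logits, targets_i2t)  # soft-label CE (torch >= 1.10)

    # Text -> image direction: each text still has exactly one positive image.
    loss_t2i = F.cross_entropy(logits.t(), text_to_image)

    return 0.5 * (loss_i2t + loss_t2i)
```

Without the mask, any off-diagonal (image, text) pair from the same image would be pushed apart as a negative, which is exactly the problem described above.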
My second question: the texts may be of different kinds, such as image captions, keywords, and categories. Do I need to handle these differently? Many thanks!
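One idea I have considered for the second question, in the spirit of the prompt templates from the CLIP paper, is to turn non-sentence texts into caption-like prompts before tokenization. The templates below are just hypothetical examples of that approach:

```python
def to_prompt(text: str, kind: str) -> str:
    """Turn a raw text of a given kind into a caption-like prompt.

    The templates are illustrative; in practice they should be tuned
    to the dataset's domain.
    """
    if kind == "caption":
        return text  # captions are already full sentences
    if kind == "keyword":
        return f"a photo with {text}"
    if kind == "category":
        return f"a photo of a {text}"
    raise ValueError(f"unknown text kind: {kind}")
```

This way captions, keywords, and categories can share one text encoder and one training loop, differing only in how they are templated.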