CoOp
regression
Thanks for sharing this code. Was curious whether you had considered how to handle a regression task?
I thought I might try a few ideas out, perhaps starting with a simple percentage label, like [V]1 [V]2 ... [V]M 40%,
but was curious if you had tried this or had any intuitions.
Not sure if turning CLIP into a regression model would work, because CLIP was pretrained with a contrastive learning loss, which is significantly different from a regression objective.
But a straightforward way to try is to train a single prompt, e.g., "V_1 ... V_M target value", with a regression loss like mean squared error, so that multiplying the resulting text features with an image's features yields a continuous value that is regressed toward the target.
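The single-prompt idea above could be sketched roughly as follows. This is not the CoOp codebase: the "encoders" here are tiny linear stand-ins for CLIP's frozen image and text encoders, and all dimensions and hyperparameters are illustrative assumptions. The point is only the shape of the setup: one set of learnable context tokens, a cosine similarity per image, and an MSE loss toward a continuous target.

```python
import torch
import torch.nn as nn

EMBED_DIM = 32   # CLIP uses 512/768; kept small for illustration
CTX_DIM = 16     # dimension of each context token (assumed)
M = 4            # number of learnable context tokens [V]_1 ... [V]_M

class PromptRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # learnable context vectors, analogous to CoOp's prompt tokens
        self.ctx = nn.Parameter(torch.randn(M, CTX_DIM) * 0.02)
        # stand-in "text encoder" (a real setup would use CLIP's, frozen)
        self.text_encoder = nn.Linear(M * CTX_DIM, EMBED_DIM)
        # stand-in "image encoder" (likewise a placeholder for CLIP's)
        self.image_encoder = nn.Linear(64, EMBED_DIM)

    def forward(self, images):
        img = self.image_encoder(images)
        txt = self.text_encoder(self.ctx.flatten())
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm()
        # cosine similarity between each image and the single prompt:
        # one continuous prediction per image, shape (batch,)
        return img @ txt

torch.manual_seed(0)
model = PromptRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
images = torch.randn(8, 64)          # fake image features
targets = torch.rand(8)              # e.g., percentages scaled to [0, 1]

losses = []
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(images), targets)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Note that cosine similarity is bounded in [-1, 1], so targets would need to be scaled into that range (here, percentages mapped to [0, 1]).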
@dribnet hi, just a follow-up because I'm also interested to know if CLIP works for regression tasks, any update to share?
I ended up not taking this route but instead adapted the dataset to be classes (positive / negative).
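For comparison, the binarized route could look something like the sketch below: keep the standard CLIP-style classification setup, but with only two learnable class prompts ("positive" / "negative") and a cross-entropy loss. As before, the encoders are tiny stand-ins for CLIP's and all names and dimensions are assumptions, not the actual code used.

```python
import torch
import torch.nn as nn

EMBED_DIM = 32
CTX_DIM = 16
M = 4
N_CLASSES = 2  # positive / negative

class PromptClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # one set of learnable context tokens per class
        self.ctx = nn.Parameter(torch.randn(N_CLASSES, M, CTX_DIM) * 0.02)
        self.text_encoder = nn.Linear(M * CTX_DIM, EMBED_DIM)  # stand-in
        self.image_encoder = nn.Linear(64, EMBED_DIM)          # stand-in
        # learnable temperature, analogous to CLIP's logit scale
        self.logit_scale = nn.Parameter(torch.tensor(2.0))

    def forward(self, images):
        img = self.image_encoder(images)
        txt = self.text_encoder(self.ctx.reshape(N_CLASSES, -1))
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        # scaled cosine similarities -> (batch, 2) class logits
        return self.logit_scale.exp() * img @ txt.t()

torch.manual_seed(0)
model = PromptClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
images = torch.randn(16, 64)             # fake image features
labels = torch.randint(0, 2, (16,))      # binarized targets

losses = []
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

This stays much closer to CLIP's pretraining objective than direct regression does, which may be why it was the easier route in practice.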
I did also find a more detailed discussion of the issues with percentage-type labels, as I had originally proposed, in gwern's GPT-3 writeup, in the calibration section.
interesting! thanks