DiffAug
Question about the missing unsupervised representation learning experiments
Hi Tianduo, I really appreciate your work on developing learnable data augmentation for sentence representation learning. Your proposed method, DiffAug, shows really strong performance in the semi-supervised and supervised settings.
However, I was wondering how DiffAug performs in the unsupervised setting.
- If you have already tried it, does DiffAug still outperform SimCSE?
- If not, what do you think about first training the prefix with unsupervised contrastive learning (keeping the language model frozen), and then jointly training the language model and the prefix?
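To make the second idea concrete, here is a minimal sketch of the two-stage schedule I have in mind. This is purely illustrative and not from the DiffAug codebase; the parameter names (`prefix.*`, `lm.*`) and the helper `trainable_names` are hypothetical, standing in for whichever prefix and language-model parameter groups the real implementation uses:

```python
# Hypothetical two-stage schedule (illustrative names, not DiffAug's API):
#   stage 1: unsupervised contrastive warm-up, only prefix params update
#   stage 2: joint fine-tuning, language model and prefix both update

def trainable_names(stage, named_params):
    """Return the parameter names that should receive gradients in a stage."""
    if stage == 1:
        # Language model frozen: only prefix parameters are trainable.
        return [n for n in named_params if n.startswith("prefix.")]
    # Joint training: everything is trainable.
    return list(named_params)

params = ["prefix.embedding", "lm.layer0.weight", "lm.layer1.weight"]
print(trainable_names(1, params))  # stage 1: prefix parameters only
print(trainable_names(2, params))  # stage 2: all parameters
```

In a real training loop this selection would translate into setting `requires_grad` on the corresponding parameter groups (or building separate optimizer groups) before each stage.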