DirectCLR: Understanding Dimensional Collapse in Contrastive Self-supervised Learning
18.10.2021 https://arxiv.org/abs/2110.09348 https://github.com/facebookresearch/directclr
Similar to SimCLR, but it does not require a projection head; instead, the loss is computed on only a subset of the embedding dimensions. The main focus of the paper is dimensional collapse, and it uses SVD to monitor collapse during training. Outperforms SimCLR with a single-layer projection head.
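For orientation, a minimal sketch of the DirectCLR idea (apply InfoNCE to only the first `d0` dimensions of the backbone representation, no projection head). The defaults for `d0` and `temperature` are illustrative, not necessarily the paper's exact values:

```python
import torch
import torch.nn.functional as F

def directclr_loss(z0: torch.Tensor, z1: torch.Tensor,
                   d0: int = 64, temperature: float = 0.1) -> torch.Tensor:
    """DirectCLR-style loss sketch: InfoNCE on the first d0 dimensions.

    z0, z1: (batch, dim) backbone representations of two augmented views.
    """
    # keep only the first d0 dimensions, then L2-normalize
    z0 = F.normalize(z0[:, :d0], dim=1)
    z1 = F.normalize(z1[:, :d0], dim=1)
    # cosine-similarity logits between all pairs across the two views
    logits = z0 @ z1.t() / temperature
    labels = torch.arange(z0.shape[0], device=z0.device)
    # symmetric InfoNCE: each sample's positive is its other view
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```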
Estimated effort to implement in Lightly: Low
- Add DirectCLR loss
- Add SVD to monitor dimensional collapse
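For the second item, a sketch of how collapse monitoring could look, assuming we follow the paper's approach of inspecting the singular value spectrum of the embedding covariance (function name and interface are hypothetical, not existing Lightly API):

```python
import torch

def embedding_spectrum(embeddings: torch.Tensor) -> torch.Tensor:
    """Log singular value spectrum of the embedding covariance matrix.

    A long tail of near-zero singular values indicates dimensional
    collapse. embeddings: (n_samples, dim) collected during training.
    """
    # center the embeddings and form the covariance matrix
    z = embeddings - embeddings.mean(dim=0)
    cov = z.t() @ z / (z.shape[0] - 1)
    # singular values, returned in descending order
    s = torch.linalg.svdvals(cov)
    # log spectrum is what one would plot to visualize collapse
    return torch.log(s + 1e-12)
```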
I can take a look at this and let you know. Are you guys hiring new grads remotely? I would be excited to work with you guys.
Hi @Atharva-Phatak thank you for your valuable contributions and your interest in working with us 🙂 Please make sure to check our jobs page (https://www.lightly.ai/jobs) for information about openings at Lightly.
We can split this issue into separate pull-requests to manage the work load. One for the DirectCLR loss and one for monitoring dimensional collapse with SVD. What do you think?
@philippmwirth That sounds good. Let me take a look at the DirectCLR paper and the implementation that they have released. I took a look at the job openings; you are only hiring senior engineers, unfortunately 😭.
@philippmwirth Quick question: will this PR also require implementing the DirectCLR head? Also, do we have a working implementation of the InfoNCE loss?
> I took a look at the job openings; you are only hiring senior engineers, unfortunately 😭.
That's unfortunate! Maybe you can keep an eye on the openings and see if something comes up... 🙂
> Quick question: will this PR also require implementing the DirectCLR head? Also, do we have a working implementation of the InfoNCE loss?
Feel free to split the work the way it suits you best (you can also do it all in one). Regarding the implementation of the InfoNCE loss, see this here.
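For reference, a minimal NT-Xent/InfoNCE sketch in the SimCLR style, where the other 2N-2 batch samples act as negatives. This is only an illustration of the loss structure, not Lightly's actual implementation:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z0: torch.Tensor, z1: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent over 2N samples: for each sample, the other view of the
    same image is the positive; the remaining 2N-2 samples are negatives.
    """
    n = z0.shape[0]
    z = F.normalize(torch.cat([z0, z1], dim=0), dim=1)  # (2N, d)
    sim = z @ z.t() / temperature                       # (2N, 2N)
    # mask self-similarity so a sample is never its own negative
    sim.fill_diagonal_(float("-inf"))
    # positive of sample i is i + n (and i - n for the second half)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets.to(sim.device))
```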