
Downstream task

Open ankan8145 opened this issue 1 year ago • 5 comments

After training the model, can we use only the target encoder for downstream tasks, e.g. image captioning?

ankan8145 avatar Oct 29 '23 12:10 ankan8145

You can use the (context) encoder rather than the target encoder for the task. During training, the encoder was trained to predict the masked regions given an unmasked context, so the encoder would be the natural choice!
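For reference, here is a minimal sketch of pulling either network out of a pretrained checkpoint. It assumes the checkpoint stores `encoder` and `target_encoder` state dicts (as the training script in this repo saves them), that keys carry a `module.` prefix from DistributedDataParallel, and that `vit_huge` is the repo's ViT constructor; the file name is a placeholder for one of the released checkpoints:

```python
import torch
from src.models.vision_transformer import vit_huge  # repo's ViT constructor (assumed)

# Placeholder path for a released checkpoint.
ckpt = torch.load('IN1K-vit.h.14-300e.pth.tar', map_location='cpu')

def strip_module(sd):
    # Checkpoints saved under DistributedDataParallel prefix keys with 'module.'
    return {k.replace('module.', ''): v for k, v in sd.items()}

encoder = vit_huge(patch_size=14)         # context encoder
target_encoder = vit_huge(patch_size=14)  # EMA / target encoder
encoder.load_state_dict(strip_module(ckpt['encoder']))
target_encoder.load_state_dict(strip_module(ckpt['target_encoder']))
```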

VimukthiRandika1997 avatar Mar 20 '24 15:03 VimukthiRandika1997

@VimukthiRandika1997 I was thinking about this in a similar way, although the paper says: "We use the target-encoder for evaluation and average pool its output to produce a global image representation." You can check this in the first paragraph of the paper's appendix (A.1. Pretraining). Could you take a look at this?

FalsoMoralista avatar Apr 08 '24 18:04 FalsoMoralista

Yeah, I looked into that. I think in this case it makes sense to use the target encoder for evaluation. The main reason might be that the target encoder can learn the full semantics within images, whereas the context encoder only learns how to represent the given image context (blocks).
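Concretely, a minimal sketch of the evaluation protocol that quote describes: average pool the target encoder's patch tokens into a global feature, then fit a linear probe on the frozen features. This assumes `target_encoder` is loaded as above and returns one token per patch (I-JEPA's ViT has no CLS token); 1280 is the ViT-H embedding dim and 1000 classes are assumed for illustration:

```python
import torch
import torch.nn as nn

target_encoder.eval()

@torch.no_grad()
def global_feature(images):
    tokens = target_encoder(images)  # [B, num_patches, embed_dim]
    return tokens.mean(dim=1)        # average pool -> [B, embed_dim]

# Linear probe on frozen features (1280 = ViT-H embed dim; 1000 classes assumed).
probe = nn.Linear(1280, 1000)
logits = probe(global_feature(torch.randn(2, 3, 224, 224)))
```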

I was mainly inspired by a previous approach called BYOL, where the online encoder (similar to the context encoder) is used after training. We can try out the context encoder and compare the results as well, since both checkpoints are available!
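For context, the two checkpoints are closely tied: during pretraining the target encoder is a momentum (EMA) copy of the context encoder, roughly as in this sketch (the momentum value is illustrative; in practice it is annealed toward 1.0 over training):

```python
import torch

m = 0.996  # momentum coefficient (illustrative; annealed in practice)

with torch.no_grad():
    for p_ctx, p_tgt in zip(encoder.parameters(), target_encoder.parameters()):
        # target <- m * target + (1 - m) * context
        p_tgt.data.mul_(m).add_((1.0 - m) * p_ctx.detach().data)
```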

VimukthiRandika1997 avatar Apr 10 '24 16:04 VimukthiRandika1997

That makes a lot of sense, really nice intuition. They actually do test both approaches for reconstruction (also shown in the appendix), but personally I didn't find the conclusions visually intuitive.

P.S.: some other folks and I are teaming up to reproduce some of the experiments, mess around with the architecture, etc. If you want to join, add me on Discord: falsomoralista.

FalsoMoralista avatar Apr 10 '24 16:04 FalsoMoralista

Hello @FalsoMoralista, I'm currently interested in pretraining I-JEPA and fine-tuning the pretrained model on a semantic segmentation task. Can I join you? This is my Discord handle: spidartist.

Spidartist avatar May 31 '24 12:05 Spidartist