
An example of fine-tuning FLAVA or any VLP multimodal model using the Trainer (for example, for classification)

Ngheissari opened this issue 1 year ago • 2 comments

Feature request

There is no example of fine-tuning any VLP model using the Trainer. I would appreciate an example.

Motivation

It is not clear how to use the Trainer with any pretrained vision-and-language model.

Your contribution

None.

Ngheissari · Jul 07 '22

Hi,

Notebooks for FLAVA will soon be available in https://github.com/NielsRogge/Transformers-Tutorials.

You can already find some tutorials here: https://github.com/apsdehal/flava-tutorials.

cc @apsdehal

NielsRogge · Jul 08 '22

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] · Aug 07 '22

The issue was automatically marked as closed, but there aren't yet any resources on how to fine-tune FLAVA. Neither of the links posted above by @NielsRogge has instructions on fine-tuning.

I'm also posting to express my interest in this.

jorgemcgomes · Aug 23 '22

Are there any fine-tuning tutorials yet in 2024?

daanishaqureshi · Mar 13 '24
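
For anyone landing here before an official notebook appears, below is a minimal, untested sketch of one way to do this: wrap FlavaModel with a linear classification head so the Hugging Face Trainer can run the training loop. The checkpoint name, the pooling choice (using the first token of the fused multimodal sequence), the hidden-size lookup via multimodal_config, and the hypothetical train_dataset are assumptions on my part, not an official recipe from the maintainers.

```python
# Minimal, untested sketch: FLAVA + linear head for multimodal classification with Trainer.
from torch import nn
from transformers import FlavaModel, Trainer, TrainingArguments


class FlavaForClassification(nn.Module):
    def __init__(self, num_labels: int, checkpoint: str = "facebook/flava-full"):
        super().__init__()
        self.flava = FlavaModel.from_pretrained(checkpoint)
        # Assumption: the fused multimodal hidden size is exposed on multimodal_config.
        hidden_size = self.flava.config.multimodal_config.hidden_size
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None,
                pixel_values=None, labels=None):
        outputs = self.flava(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            pixel_values=pixel_values,
        )
        # Assumption: treat the first token of the multimodal embeddings as a pooled,
        # [CLS]-like representation of the (image, text) pair.
        pooled = outputs.multimodal_embeddings[:, 0]
        logits = self.classifier(pooled)
        loss = None
        if labels is not None:
            loss = nn.functional.cross_entropy(logits, labels)
        # Returning a dict with "loss" and "logits" is enough for Trainer's default compute_loss.
        return {"loss": loss, "logits": logits}


# Hypothetical usage: train_dataset is assumed to already contain input_ids,
# attention_mask, token_type_ids, pixel_values and labels (e.g. produced with FlavaProcessor).
model = FlavaForClassification(num_labels=2)
training_args = TrainingArguments(output_dir="flava-classification",
                                  per_device_train_batch_size=8,
                                  num_train_epochs=3)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```

Inputs can be prepared with FlavaProcessor (which returns input_ids, attention_mask, token_type_ids and pixel_values for paired text and images); since the wrapper is a plain nn.Module rather than a PreTrainedModel, expect Trainer to warn about that, and verify the pooling choice against the FLAVA paper before relying on it.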