Custom Dataset Fine-Tuning with Image Captioning
Hi everyone,
I am trying to fine-tune the OFA-large model on a custom dataset. It doesn't contain the cached tokens, so during Stage-1 fine-tuning I removed the --eval-cider
argument and completed the training successfully.
For Stage-2 I did the same, also removed the --scst-cider
argument, and started the training. During initialization it raises FileNotFoundError: [Errno 2] No such file or directory: 'coco-train-words.p'
My training configuration is:
python ../../train.py \
$data \
--selected-cols=${selected_cols} \
--bpe-dir=${bpe_dir} \
--user-dir=${user_dir} \
--restore-file=${restore_file} \
--reset-optimizer --reset-dataloader --reset-meters \
--save-dir=${save_path} \
--task=${task} \
--arch=${arch} \
--criterion=${criterion} \
--batch-size=${batch_size} \
--update-freq=${update_freq} \
--encoder-normalize-before \
--decoder-normalize-before \
--share-decoder-input-output-embed \
--share-all-embeddings \
--layernorm-embedding \
--patch-layernorm-embedding \
--code-layernorm-embedding \
--resnet-drop-path-rate=${resnet_drop_path_rate} \
--encoder-drop-path-rate=${encoder_drop_path_rate} \
--decoder-drop-path-rate=${decoder_drop_path_rate} \
--dropout=${dropout} \
--attention-dropout=${attention_dropout} \
--weight-decay=0.01 --optimizer=adam --adam-betas="(0.9,0.999)" --adam-eps=1e-08 --clip-norm=1.0 \
--lr-scheduler=polynomial_decay --lr=${lr} --end-learning-rate=2e-7 \
--max-epoch=${max_epoch} --warmup-ratio=${warmup_ratio} \
--log-format=simple --log-interval=10 \
--fixed-validation-seed=7 \
--no-epoch-checkpoints --keep-best-checkpoints=1 \
--save-interval=1 --validate-interval=1 \
--save-interval-updates=500 --validate-interval-updates=500 \
--best-checkpoint-metric=loss \
--max-src-length=${max_src_length} \
--max-tgt-length=${max_tgt_length} \
--find-unused-parameters \
--freeze-encoder-embedding \
--freeze-decoder-embedding \
--add-type-embedding \
--scale-attn \
--scale-fc \
--scale-heads \
--disable-entangle \
--num-bins=${num_bins} \
--patch-image-size=${patch_image_size} \
--memory-efficient-fp16 \
--fp16-scale-window=512 \
--num-workers=0 > ${log_file} 2>&1
Kindly help me sort out this issue.
Thank you in advance.
@Srikeshram Stage-2 fine-tuning means CIDEr optimization. If you want to perform CIDEr optimization on a custom dataset, you should create xxx-train-words.p yourself.
How could I create that one?
Would you please suggest the steps?
@logicwong
@Srikeshram You should calculate the ref_len and document_frequency of the training dataset. Please refer to the evaluation code snippet1 and snippet2.
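In case it helps, here is a minimal sketch of how such a cached file could be built. It assumes the pickle follows the format used by the CIDEr-D scorer (a dict with 'document_frequency' and 'ref_len' keys, where document frequency is counted over 1- to 4-grams of the training references); the function names and whitespace tokenization below are illustrative, and captions should be preprocessed the same way as in the evaluation code.

import pickle
from collections import defaultdict
import numpy as np

def count_ngrams(caption, max_n=4):
    # count 1- to 4-grams of a whitespace-tokenized caption
    words = caption.split()
    counts = defaultdict(int)
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += 1
    return counts

def build_cached_tokens(refs_per_image, out_path):
    # refs_per_image: one list of reference captions per training image
    document_frequency = defaultdict(float)
    for refs in refs_per_image:
        # an n-gram is counted once per image, regardless of how many
        # of that image's references contain it
        seen = set()
        for ref in refs:
            seen.update(count_ngrams(ref).keys())
        for ngram in seen:
            document_frequency[ngram] += 1
    # ref_len is the log of the number of reference sets (images)
    ref_len = np.log(float(len(refs_per_image)))
    with open(out_path, 'wb') as f:
        pickle.dump({'document_frequency': document_frequency, 'ref_len': ref_len},
                    f, protocol=pickle.HIGHEST_PROTOCOL)

# hypothetical usage:
# build_cached_tokens([["a dog runs on the grass", "a dog is running"]],
#                     "custom-train-words.p")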
Can I have a sample xxx_words.p file to look into? @logicwong
@Srikeshram You can download this
@logicwong Thank you for sharing the file. I could see the contents. Can you please elaborate on the procedure to generate the document frequency for the captions in the custom dataset?
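For reference, a quick way to inspect the shared file is to load it with pickle and look at the stored entries; the key names below ('document_frequency', 'ref_len') are what the CIDEr-D cache typically contains and are an assumption about this particular file.

import pickle
from itertools import islice

with open('coco-train-words.p', 'rb') as f:
    # if the file was created with Python 2, pickle.load(f, encoding='latin1') may be needed
    cache = pickle.load(f)

print(cache.keys())        # expected: 'document_frequency' and 'ref_len'
print(cache['ref_len'])    # log of the number of reference caption sets
# print a few n-gram -> document-frequency entries
for ngram, df in islice(cache['document_frequency'].items(), 5):
    print(ngram, df)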
@Srikeshram What batch_size were you able to set when fine-tuning OFA-large on your custom captioning dataset? With my code, the max batch_size is less than 10 on a 32G V100.
Hello, I have been trying to fine-tune with image captioning, but I find myself at a loss in this process. Could you suggest a tutorial for it?
Maybe you can try this repository. I succeeded in using transformers to train the OFA model and run inference: https://github.com/yangjianxin1/OFA-Chinese