self-supervised-speech-recognition

finetune.py optimization.update_freq

Open TaridaGeorge opened this issue 4 years ago • 3 comments

I was wondering why in the finetune.py file you've set update_freq to be 24/NUM_GPU.

    cmd.append("+optimization.update_freq='[" + str(int(24/NUM_GPU)) + "]'")

In the wav2vec README https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md they say that the base model was trained using 64 V100 GPUs, and as I understood it, if we want to do more training on the base model we should simulate the same number of GPUs they used.

Note: you can simulate 64 GPUs by using k GPUs and adding command line parameters (before --config-dir) distributed_training.distributed_world_size=k +optimization.update_freq='[x]' where x = 64/k

Have you found that setting update_freq to be 24/NUM_GPU is better for training or is it a bug?

TaridaGeorge avatar Mar 03 '21 13:03 TaridaGeorge
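For context, a minimal sketch (not from the repo) of the arithmetic the README note describes: with k physical GPUs, update_freq = x accumulates gradients over x steps before each optimizer update, so the effective batch size matches that of k * x GPUs. The helper names below are illustrative only.

    # Hedged sketch of the update_freq / GPU-count relationship described in the README note.
    def simulated_gpus(num_gpus: int, update_freq: int) -> int:
        """Effective number of GPUs being simulated via gradient accumulation."""
        return num_gpus * update_freq

    def update_freq_for(target_gpus: int, num_gpus: int) -> int:
        """update_freq needed to simulate target_gpus using num_gpus (assumes divisibility)."""
        return target_gpus // num_gpus

    # Pre-training (README): simulate 64 V100s with, e.g., 8 GPUs -> update_freq = 8
    assert update_freq_for(64, 8) == 8
    assert simulated_gpus(8, 8) == 64

    # finetune.py in this repo hard-codes 24 as the target, e.g., 4 GPUs -> update_freq = 6
    assert update_freq_for(24, 4) == 6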

optimization.update_freq='[x]' where x = 64/k belongs to the pre-training step, not to fine-tuning.

mailong25 avatar Mar 03 '21 15:03 mailong25

And 24 belongs to fine-tuning? Is it 24 or 8? I saw that for the base model they used 8 GPUs and for the large model 24.

TaridaGeorge avatar Mar 03 '21 16:03 TaridaGeorge

Yup! The number should follow the wav2vec repo instructions.

mailong25 avatar Mar 03 '21 17:03 mailong25
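Following that answer, one way the hard-coded 24 could be made to match the wav2vec README (which fine-tunes the base model on 8 GPUs and the large model on 24) is to pick the target by model size. This is a hypothetical helper, not code from the repo; update_freq_arg and its model_size parameter are assumptions for illustration.

    # Hedged sketch: choose the update_freq target per model size instead of hard-coding 24.
    TARGET_GPUS = {"base": 8, "large": 24}

    def update_freq_arg(model_size: str, num_gpu: int) -> str:
        """Build the +optimization.update_freq override, mirroring the line quoted above."""
        target = TARGET_GPUS[model_size]
        return "+optimization.update_freq='[" + str(int(target / num_gpu)) + "]'"

    # e.g. fine-tuning the base model on 2 GPUs:
    # cmd.append(update_freq_arg("base", 2))   # -> "+optimization.update_freq='[4]'"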