Batch size?
As stated in the paper, 8 GPUs were used to train the models. Since the batch size in the config is set to 1, this means the effective batch size per gradient step is 8, right? So when training on a single V100 GPU, is it recommended to use a batch size of 8 (via gradient accumulation, since it does not fit in memory)?
Yes! It's worth trying and can lead to lower validation losses. Let us know what your findings are.
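For anyone wanting to try this on a single GPU, here is a minimal sketch of gradient accumulation in plain PyTorch. The model, optimizer, criterion, and data below are dummy stand-ins for illustration, not Flowtron's actual `train.py`:

```python
import torch
import torch.nn as nn

# Dummy stand-ins; in practice these come from your training script.
model = nn.Linear(80, 80)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()
data = [(torch.randn(1, 80), torch.randn(1, 80)) for _ in range(32)]

accum_steps = 8  # loader batch_size=1 -> effective batch size of 8 per optimizer step

model.train()
optimizer.zero_grad()
for i, (x, y) in enumerate(data):
    loss = criterion(model(x), y)
    (loss / accum_steps).backward()  # scale so the accumulated gradient is a mean
    if (i + 1) % accum_steps == 0:
        optimizer.step()             # one "virtual batch" of 8 samples done
        optimizer.zero_grad()
```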
Is there any particular reason batch_size is set to 1 in the config? For my data, I found that batch_size can be set to 16.
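For reference, the knob lives in the repo's config.json under `train_config`; a trimmed example (other keys omitted, the value of 16 is just what worked for my data):

```json
{
  "train_config": {
    "batch_size": 16
  }
}
```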
@IsakWesterlundBitville Thanks for your comment. I tried your suggestion and compared training with batch size 4 against the default batch size 1, using 2 GPUs. However, with batch size 4 the alignment only forms a diagonal up to a much smaller decoder time step than the alignment obtained with batch size 1. Could you tell me whether you changed any other config settings to account for the change in batch size?
*(alignment plots attached)*
@rafaelvalle Do you have any idea why I get this difference in the alignment plots?
The two samples are different, hence the alignment plots are different.