
Don't you accumulate the validation gradients too while training?

Open burak43 opened this issue 5 years ago • 2 comments

In the train_i3d.py file, you call loss.backward() in both the train and val phases. Doesn't that accumulate gradients for the validation loss too, no matter that you put the model in eval mode (since eval mode only affects the behaviour of some layers, such as dropout and batch norm)? Is there something specific to PyTorch 0.3.0 that blocks validation gradient accumulation?
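A minimal standalone sketch (not taken from train_i3d.py) illustrating the point in the question: eval() only switches the behaviour of layers like dropout and batch norm, so loss.backward() still accumulates into the parameters' .grad buffers during validation.

```python
import torch
import torch.nn as nn

# eval() does not disable autograd, so backward() still accumulates gradients.
model = nn.Linear(4, 1)
model.eval()  # only affects layers like dropout/batchnorm, not autograd

x = torch.randn(2, 4)
loss = model(x).sum()
loss.backward()

print(model.weight.grad is not None)  # True: gradients accumulated despite eval()
```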

burak43 avatar Feb 24 '20 19:02 burak43

These lines: https://github.com/piergiaj/pytorch-i3d/blob/master/train_i3d.py#L115-L119

Only apply the gradient step when in training mode. Combined with https://github.com/piergiaj/pytorch-i3d/blob/master/train_i3d.py#L86 the gradients from the validation step are never applied.

For efficiency, the loss.backward() call could be removed from the validation step, but since the validation gradients are never applied, leaving it in does not impact model accuracy.
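A hypothetical refactor of the validation phase along these lines (not the repo's code): wrapping the forward pass in torch.no_grad() skips building the autograd graph entirely, so no backward() call is needed and no gradients build up.

```python
import torch
import torch.nn as nn

# Under torch.no_grad() no graph is built, so there is nothing to backpropagate
# and no gradient buffers are touched during validation.
model = nn.Linear(4, 1)
x = torch.randn(2, 4)

with torch.no_grad():
    val_loss = model(x).sum()  # forward only; no autograd graph

print(val_loss.requires_grad)  # False: nothing to backpropagate
print(model.weight.grad)       # None: no gradients accumulated
```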

piergiaj avatar Feb 24 '20 20:02 piergiaj

I see. Then, as I said in https://github.com/piergiaj/pytorch-i3d/issues/44#issuecomment-590037573, when num_steps_per_update is not a multiple of len(dataloader), the leftover accumulated training gradients are zeroed without optimizer.step() ever being called on them when the phase changes from training to validation. As a result, the losses from those leftover training forward passes are never used.
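A hypothetical sketch of the accumulation pattern being discussed (the variable names mirror the repo's num_steps_per_update, but the model and loop are made up): with an update interval of 4 and 6 batches, the step only fires once, and the gradients from the last 2 batches sit in the buffers until the next zero_grad(), unapplied.

```python
import torch
import torch.nn as nn

# Gradient accumulation where the batch count is not a multiple of the
# update interval: the final partial accumulation never reaches step().
model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
num_steps_per_update, num_batches = 4, 6

opt.zero_grad()
steps_applied = 0
for i in range(num_batches):
    loss = model(torch.randn(2, 4)).sum()
    loss.backward()  # accumulates into .grad every batch
    if (i + 1) % num_steps_per_update == 0:
        opt.step()       # fires only at batch 4
        opt.zero_grad()
        steps_applied += 1

leftover = num_batches % num_steps_per_update  # batches whose grads were never applied
print(steps_applied, leftover)  # 1 2
```

One possible fix, under these assumptions, is a final flush at the end of the epoch: if any batches were accumulated since the last update, call opt.step() (optionally scaling the loss accordingly) before zeroing gradients or switching phases.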

burak43 avatar Feb 25 '20 05:02 burak43