littleWangyu
Issue: in the train epoch, why is optimizer.zero_grad() called after optimizer.step()? Does it matter? The usual order is optimizer.zero_grad() -> loss.backward() -> optimizer.step().
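A minimal sketch of why the two orderings behave identically across iterations, using plain Python instead of PyTorch (the function `train` and the quadratic loss are illustrative assumptions, not code from the thread). The key point is that gradients accumulate until explicitly zeroed, so it only matters that the zeroing happens somewhere between one step() and the next backward() -- whether at the top of the loop or right after step() is equivalent:

```python
# Pure-Python analogy for PyTorch's gradient-accumulation semantics.
# f(w) = (w - 3)^2, so the gradient is 2 * (w - 3).

def grad(w):
    return 2.0 * (w - 3.0)

def train(order, steps=10, lr=0.1):
    w = 10.0   # parameter
    g = 0.0    # accumulated gradient (PyTorch: param.grad)
    for _ in range(steps):
        if order == "zero_first":
            g = 0.0          # optimizer.zero_grad()
            g += grad(w)     # loss.backward() accumulates into g
            w -= lr * g      # optimizer.step()
        else:  # "zero_last"
            g += grad(w)     # loss.backward()
            w -= lr * g      # optimizer.step()
            g = 0.0          # optimizer.zero_grad() after step()
    return w

# Both orderings produce the same trajectory, because g starts at 0
# and is reset exactly once per iteration in each variant.
assert abs(train("zero_first") - train("zero_last")) < 1e-12
```

What would actually break things is omitting the zeroing entirely: then each backward() adds onto stale gradients and the update direction is wrong.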