quantized.pytorch
crash during training
TRAINING - Epoch: [0][410/446] Time 0.602 (0.622) Data 0.000 (0.005) Loss 4.0999 (5.5282) Prec@1 2.344 (3.435) Prec@5 19.531 (14.536)
TRAINING - Epoch: [0][420/446] Time 0.602 (0.622) Data 0.000 (0.005) Loss 4.1251 (5.4952) Prec@1 3.906 (3.459) Prec@5 20.312 (14.664)
TRAINING - Epoch: [0][430/446] Time 0.611 (0.621) Data 0.000 (0.005) Loss 4.0770 (5.4635) Prec@1 3.125 (3.478) Prec@5 24.219 (14.813)
TRAINING - Epoch: [0][440/446] Time 0.600 (0.621) Data 0.000 (0.005) Loss 4.0965 (5.4331) Prec@1 7.031 (3.515) Prec@5 19.531 (14.948)
Traceback (most recent call last):
File "main.py", line 305, in
I encountered the same issue. I think it happens because the number of elements in the batch is not a multiple of (C * self.num_chunks). It does not show up until the last step of the epoch, where the batch is smaller than the others (see the sketch below).
Seems to be a bug.
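For anyone debugging this, here is a minimal sketch of that failure mode. The shapes, the `chunked_minmax` helper, and `num_chunks=16` are hypothetical, and the chunked reduction is simplified; the actual RangeBN-style code in quantized.pytorch may differ in detail. The point is that a chunked `.view()` only succeeds when the per-batch element count divides evenly by `num_chunks`, which the smaller final batch of an epoch can violate:

```python
import torch

def chunked_minmax(x, num_chunks=16):
    # Hypothetical sketch of a chunk-based range computation; the real
    # module in quantized.pytorch may differ in detail.
    B, C, H, W = x.shape
    y = x.transpose(0, 1).contiguous()   # C x B x H x W
    # Raises "RuntimeError: shape ... is invalid for input of size ..."
    # whenever B * H * W is not divisible by num_chunks.
    y = y.view(C, num_chunks, -1)
    return y.min(-1)[0].mean(-1), y.max(-1)[0].mean(-1)

full = torch.randn(128, 3, 7, 7)   # 128 * 7 * 7 = 6272, divisible by 16: OK
chunked_minmax(full)

tail = torch.randn(100, 3, 7, 7)   # 100 * 7 * 7 = 4900, not divisible by 16
# chunked_minmax(tail)             # -> RuntimeError on the last, smaller batch

# One workaround until the module handles ragged batches: drop the
# incomplete final batch on the training DataLoader.
# loader = torch.utils.data.DataLoader(dataset, batch_size=128, drop_last=True)
```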