YOLOv3-complete-pruning
WARNING: non-finite loss, ending training
Hello. While quantizing on the Oxford Hand dataset with

```
python3 train.py --data data/oxfordhand.data --batch-size 32 --accumulate 1 --weights weights/yolov3.weights --cfg cfg/yolov3-quantize-hand.cfg
```

the model's predictions are always -inf and the loss becomes NaN. What is going wrong, and how can I fix it? I tried both yolov3.weights and weights/darknet53.conv.74 as pre-trained weights, and the result is the same either way.
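To narrow down where the non-finite values first appear, here is a minimal debugging sketch using generic PyTorch forward hooks (`add_finite_checks` is a hypothetical helper name, not part of this repo):

```python
import torch

def add_finite_checks(model):
    # Register a forward hook on every submodule that reports layers
    # whose output contains inf or nan. Hooks fire in forward order,
    # so the first line printed points at the first offending layer.
    def make_hook(name):
        def hook(module, inputs, output):
            if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
                print(f"non-finite output in: {name} ({module.__class__.__name__})")
        return hook
    for name, module in model.named_modules():
        if name:  # skip the root module itself
            module.register_forward_hook(make_hook(name))
```

Running one batch through the model after calling this points directly at the quantized conv layers.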
I found that when w_bit and a_bit are set to 16, the output of the quantized conv becomes -inf when running forward():
```python
def forward(ctx, input, nbit):
    scale = 2 ** nbit - 1
    return torch.round(input * scale) / scale
```
When the bits are set to 8, the code runs fine. When a_bit is 16, though, scale is 2**16 - 1 = 65535. That is larger than 65504, the maximum finite value of FP16, so if the activations are in half precision (e.g. with mixed-precision training), input * scale overflows to inf/-inf near the ends of the input range, which would explain the non-finite predictions and the NaN loss.
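Here is a minimal demonstration of the overflow, plus one possible workaround (doing the quantization arithmetic in float32 and casting back; this is my suggestion, not code from this repo):

```python
import torch

nbit = 16
scale = 2 ** nbit - 1  # 65535, above the FP16 maximum of 65504

x = torch.tensor([1.0, -1.0], dtype=torch.float16)
print(x * scale)  # tensor([inf, -inf], dtype=torch.float16) -- overflow

# Possible workaround: compute in float32, then cast back to the
# input's original dtype so the rest of the network is unchanged.
def quantize(input, nbit):
    scale = 2 ** nbit - 1
    out = torch.round(input.float() * scale) / scale
    return out.to(input.dtype)

print(quantize(x, 16))  # tensor([ 1., -1.], dtype=torch.float16) -- finite
```

With 8 bits the scale is only 255, far below the FP16 limit, which matches the observation that 8-bit quantization trains normally.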