EfficientDet.Pytorch
Model can't overfit
I used EfficientDet-D0 to train on my own dataset and got poor results, much worse than even YOLOv3. So I ran a simple test to see whether the model can overfit a single data point. It can't.
I trained on only one data point containing a 'ghe_an' object. After 30 epochs the loss is still 2.6. What could the problem be?
In my environment, the cls_loss does not decrease no matter how many epochs I run; it is always 2.302124500274658.
same issue
> In my environment, the cls_loss would not decrease no matter how many epochs run, it always is 2.302124500274658.
Yes, 300 epoch trained model does.
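A side observation (my own guess, not confirmed in this thread): a classification loss pinned at ~2.302 is suspiciously close to ln(10) ≈ 2.3026, the cross-entropy of a uniform prediction over 10 classes. That would suggest the classifier head has collapsed to a constant distribution and is receiving no useful gradient. A quick sanity check:

```python
import math

# ln(10) is the cross-entropy of a uniform prediction over 10 classes.
# A cls_loss frozen near this value can mean the classifier outputs a
# constant distribution. (10 classes is an assumption here; the thread
# does not state the dataset's class count.)
uniform_ce = -math.log(1 / 10)
print(round(uniform_ce, 4))  # 2.3026
```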
Try changing the strides of the 5th and 7th blocks from (2, 2) back to (1, 1), and extract the last layers of the 1/2/3/5/7 blocks. Codes are here and here. Remember the feature size is halved there, so the 1/2/3/5/7 blocks produce feature maps at 1/2, 1/4, 1/8, 1/16, and 1/32 of the original input size.
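The downsampling bookkeeping above can be sanity-checked with a toy stand-in (hypothetical stride-2 convolutions, not the actual EfficientNet blocks): five stride-2 stages turn a 512×512 input into the 1/2 … 1/32 pyramid that the BiFPN consumes.

```python
import torch
import torch.nn as nn

# Toy sketch: each of five stages halves the spatial size, yielding
# feature maps at 1/2, 1/4, 1/8, 1/16 and 1/32 of the input.
stages = nn.ModuleList(
    nn.Conv2d(1, 1, kernel_size=3, stride=2, padding=1) for _ in range(5)
)
x = torch.randn(1, 1, 512, 512)
sizes = []
for stage in stages:
    x = stage(x)
    sizes.append(x.shape[-1])
print(sizes)  # [256, 128, 64, 32, 16]
```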
> Try to change the strides of the 5th and 7th block from s22 back to s11, and extract the last layers of 1/2/3/5/7 block. Codes are here and here
Hi @basaltzhang, can you elaborate on "extract the last layers of 1/2/3/5/7 block"?
@gmvidooly here's the code:
```python
def extract_features(self, inputs):
    # Stem
    x = self._swish(self._bn0(self._conv_stem(inputs)))

    P = []          # collected pyramid feature maps
    index = 0       # index into self._blocks_args
    num_repeat = 0  # repeats seen within the current stage

    # Blocks
    for idx, block in enumerate(self._blocks):
        drop_connect_rate = self._global_params.drop_connect_rate
        if drop_connect_rate:
            drop_connect_rate *= float(idx) / len(self._blocks)
        x = block(x, drop_connect_rate=drop_connect_rate, idx=idx)
        num_repeat += 1
        if num_repeat == self._blocks_args[index].num_repeat:
            # Last layer of a stage: keep blocks 1/2/3/5/7 (0-based 0/1/2/4/6)
            if index in {0, 1, 2, 4, 6}:
                P.append(x)
            num_repeat = 0
            index += 1
    return P
```
Deleting the line `classification = torch.clamp(classification, 1e-4, 1.0 - 1e-4)` in the focal loss seems to solve the problem.
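A plausible reason the clamp causes trouble (my interpretation, not stated above): `torch.clamp` has zero gradient outside `[min, max]`, so any prediction pushed past a boundary stops receiving gradient and the loss plateaus. A minimal demonstration with made-up values:

```python
import torch

# torch.clamp passes gradient only where min <= input <= max; elements
# outside the range get zero gradient and can never recover.
p = torch.tensor([5e-5, 0.5], requires_grad=True)
clamped = torch.clamp(p, 1e-4, 1.0 - 1e-4)
loss = -torch.log(clamped).sum()
loss.backward()
print(p.grad)  # first element's gradient is 0: it was clamped out of range
```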