
Model can't overfit

Open tienthegainz opened this issue 5 years ago • 7 comments

I used EfficientDet-D0 to train on my own dataset and got poor results, much worse than even YOLOv3. So I ran a simple sanity check to see whether the model could overfit a single data point, but it can't.

[screenshot: mAP results]

I tested on only a single data point containing a 'ghe_an' object. After 30 epochs the loss is still 2.6. I wonder what the problem is here?

tienthegainz avatar Jan 14 '20 08:01 tienthegainz
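As context for the sanity check described above: a model with enough capacity should drive the loss to near zero on a single repeated batch, and failure usually points to a bug in the loss, targets, or gradient flow. A minimal sketch of the idea (the tiny MLP here is a hypothetical stand-in, not EfficientDet):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Overfit-one-batch sanity check: train on the same tiny batch
# until the loss collapses. If it plateaus instead, something in
# the loss/targets/gradients is broken.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))
loss_fn = nn.CrossEntropyLoss()

for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"{loss.item():.4f}")  # expect a value near 0
```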

In my environment, the cls_loss does not decrease no matter how many epochs I run; it stays at 2.302124500274658.

yrd241 avatar Jan 14 '20 10:01 yrd241
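For what it's worth, that stuck value is suspiciously close to ln(10) ≈ 2.3026, the cross-entropy of a uniform prediction over 10 classes. This is only a rough diagnostic (the repo uses a sigmoid focal loss, not plain softmax cross-entropy, and the class count here is an assumption), but it suggests the classification head is outputting near-uniform scores and never learning:

```python
import math

# Cross-entropy when the model predicts a uniform distribution
# over num_classes classes: -log(1/num_classes) = log(num_classes)
num_classes = 10  # assumption: a 10-class dataset
uniform_ce = -math.log(1.0 / num_classes)
print(round(uniform_ce, 4))  # 2.3026
```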

same issue

trulyjupiter avatar Jan 15 '20 02:01 trulyjupiter

> In my environment, the cls_loss does not decrease no matter how many epochs I run; it stays at 2.302124500274658.

Yes, a model trained for 300 epochs does the same.

mad-fogs avatar Jan 15 '20 04:01 mad-fogs

Try changing the strides of the 5th and 7th blocks from s22 back to s11, and extract the last layers of blocks 1/2/3/5/7. The code is here and here. Remember the feature size is halved here, so blocks 1/2/3/5/7 produce feature maps at 1/2, 1/4, 1/8, 1/16, and 1/32 of the original input size.

basaltzhang avatar Jan 16 '20 02:01 basaltzhang

> Try changing the strides of the 5th and 7th blocks from s22 back to s11, and extract the last layers of blocks 1/2/3/5/7. The code is here and here.

Hi @basaltzhang, can you elaborate on extracting the last layers of blocks 1/2/3/5/7?

gmvidooly avatar Jan 17 '20 07:01 gmvidooly

@gmvidooly here's the code:

    def extract_features(self, inputs):
        # Stem
        x = self._swish(self._bn0(self._conv_stem(inputs)))

        P = []          # collected pyramid feature maps
        index = 0       # which block-args group we are in
        num_repeat = 0  # repeats seen within the current group

        # Blocks
        for idx, block in enumerate(self._blocks):
            drop_connect_rate = self._global_params.drop_connect_rate
            if drop_connect_rate:
                # scale drop-connect linearly with block depth
                drop_connect_rate *= float(idx) / len(self._blocks)
            x = block(x, drop_connect_rate=drop_connect_rate, idx=idx)
            num_repeat += 1
            # at the end of each group, keep the outputs of
            # blocks 1/2/3/5/7 (zero-indexed: 0/1/2/4/6)
            if num_repeat == self._blocks_args[index].num_repeat:
                if index in {0, 1, 2, 4, 6}:
                    P.append(x)
                num_repeat = 0
                index += 1

        return P

basaltzhang avatar Jan 19 '20 02:01 basaltzhang
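To illustrate the pyramid scales this produces, here is a toy sketch (not the repo's EfficientNet; just five hypothetical stride-2 stages standing in for the stem and blocks 1/2/3/5/7) showing feature maps at 1/2, 1/4, 1/8, 1/16, and 1/32 of a 512×512 input:

```python
import torch
import torch.nn as nn

# Toy backbone: each stage halves spatial resolution, mimicking
# the five extraction points described above.
class ToyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Conv2d(3 if i == 0 else 8, 8, 3, stride=2, padding=1)
            for i in range(5)
        )

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats

x = torch.randn(1, 3, 512, 512)
for p in ToyBackbone()(x):
    print(tuple(p.shape[-2:]))
# (256, 256), (128, 128), (64, 64), (32, 32), (16, 16)
```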

Deleting the line `classification = torch.clamp(classification, 1e-4, 1.0 - 1e-4)` in the focal loss seems to solve the problem.

wonderingboy avatar Apr 13 '20 08:04 wonderingboy
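A plausible explanation for why removing the clamp helps: `torch.clamp` passes zero gradient for inputs outside the clamp range, so once a prediction saturates past the bounds it stops receiving updates. A minimal sketch of that effect (the tensor values here are illustrative):

```python
import torch

# Gradient through clamp is zero for out-of-range inputs, so
# saturated predictions are frozen out of training.
p = torch.tensor([0.99999, 0.5], requires_grad=True)
loss = -torch.log(torch.clamp(p, 1e-4, 1.0 - 1e-4)).sum()
loss.backward()
print(p.grad)  # first entry is 0: clamped out of range, no gradient
```

Note that simply deleting the clamp risks `log(0)`; a numerically safer alternative would be computing the loss from logits (e.g. `binary_cross_entropy_with_logits`) rather than from clamped probabilities.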