Yet-Another-EfficientDet-Pytorch
Question about normalization order
@zylo117 Hello. In your project, images are first normalized and then resized. If the original image is very large, this takes a lot of time, both in training and in inference. Could the order be changed to resize first and normalize afterwards? Would that have a big impact on the model?
That's a good point. But the padding should be zeros instead of means; maybe you can perform the normalization within aspectaware_resize.
Thanks for your reply, but what do you mean by aspectaware_resize? Do you mean that after normalizing, I should re-fill the padded area of the resized image with zeros again?
yes. or you can skip the padding part and just normalize the rest.
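For anyone reading along: "normalize the rest" can be done by resizing first, normalizing only the content region, and then placing it on a zero canvas, so the padding stays exactly zero after normalization. A minimal numpy sketch under that assumption (the function name and the ImageNet mean/std constants are mine for illustration, not the repo's actual code):

```python
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # ImageNet stats
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def normalize_then_pad(resized, target):
    """`resized` is an HxWx3 float image in [0, 1] that has already been
    aspect-aware resized (no padding yet). Normalize it, then copy it onto
    a zero canvas, so the padded area stays exactly zero."""
    h, w = resized.shape[:2]
    canvas = np.zeros((target, target, 3), dtype=np.float32)
    canvas[:h, :w] = (resized - MEAN) / STD  # padding region is untouched
    return canvas
```

If you instead normalized after padding, the zeros would become `-MEAN / STD`, which is what the maintainer's "padding should be zeros instead of means" comment is warning about.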
OK, got it, thank you!
@zylo117 Hello, I have another question. During PyTorch inference, if multiple processes or threads are running at the same time, inference gets much slower. Is there any way to mitigate this? The environment is an NVIDIA AGX dev board. With nothing else running, a d0 model with 640 input takes about 130 ms per inference, but once other programs are running it oscillates between 160 ms and 340 ms.
isn't that normal?
I found the cause on my side: my inference ran as a thread and it interfered with another thread. After I switched it to a separate process, things improved a lot and the interference is much smaller. Thanks for your reply.
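This matches CPython's GIL behavior: two Python threads in one process cannot execute Python bytecode in parallel, so a busy sibling thread stalls the inference loop, while separate processes each get their own interpreter. A minimal sketch of the process-based layout (the worker and queue names are made up for illustration, and a doubling function stands in for the actual model call):

```python
import multiprocessing as mp

def _inference_worker(in_q, out_q):
    # In the real setup this process would load the EfficientDet model once
    # here; `item * 2` below stands in for model(item).
    for item in iter(in_q.get, None):  # None is the shutdown sentinel
        out_q.put(item * 2)

def run_in_own_process(inputs):
    """Run the stand-in 'inference' in a separate process, so it does not
    contend for the GIL with threads in the main process."""
    in_q, out_q = mp.Queue(), mp.Queue()
    worker = mp.Process(target=_inference_worker, args=(in_q, out_q), daemon=True)
    worker.start()
    for x in inputs:
        in_q.put(x)
    results = [out_q.get() for _ in inputs]
    in_q.put(None)  # tell the worker to exit
    worker.join()
    return results
```

The trade-off is that images (or tensors) must be serialized across the process boundary, so this pays off when the per-item compute dominates the queue overhead.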
@zylo117 Hello. I did as you suggested: resize the image first without padding, then normalize, then pad, and then trained. But I found a problem. I trained twice under this setup, and the two models perform very differently on data that never took part in training: one works quite well, the other does not, even though both models perform fine on the test set. Why do they differ on new data, and why do two training runs (with nothing changed) give such different results? What should I pay attention to in training? Are there any hyperparameters I should watch?
Do you mean the model performs very differently on the test set and the validation set? I think it's in the early stages and can't perform stably yet. You should visualize the loss curve to see if it is overfitting, and use coco_eval to get a fair judgement.
Yes, they differ. On the test set they are about the same, but one fails on the validation data (unlabeled data that I inspected visually; mainly the classification is wrong, while the boxes are located fine). Also, I didn't use a pretrained model; I trained directly with your hyperparameters. The reason is that when I first started training with a pretrained model, the classification was poor while the boxes were fine; without the pretrained model, the boxes are less accurate but the classification is accurate.
I trained both for over 100 epochs, and evaluated both using the epoch-103 checkpoint. I also evaluated the checkpoints at epochs 20, 40, 60 and 90. The bad model performed poorly on the validation data at every epoch, while the good model's earlier checkpoints also classified well, with few errors.
Can you please share the preprocessing code for the steps resize (no padding) >> normalize >> padding? How does this improve performance compared to resize >> augment (horizontal flip) >> normalize?
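Not the maintainer, but here is a minimal numpy-only sketch of the order discussed in this thread (resize without padding, then normalize, then pad with zeros). The function names, the ImageNet mean/std values, and the nearest-neighbor resize are stand-ins of mine, not the repo's actual aspectaware_resize code; in practice you would resize with cv2.resize and a proper interpolation. The speedup comes from normalizing the small resized image instead of the full-resolution original; flip augmentation is orthogonal and can go before this step:

```python
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # ImageNet stats
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def nn_resize(img, new_h, new_w):
    """Nearest-neighbor resize via index mapping (stand-in for cv2.resize)."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]

def preprocess(img_uint8, target=640):
    """resize (no padding) -> normalize -> pad with zeros."""
    h, w = img_uint8.shape[:2]
    scale = target / max(h, w)                  # keep aspect ratio
    nh, nw = int(h * scale), int(w * scale)
    resized = nn_resize(img_uint8, nh, nw).astype(np.float32) / 255.0
    canvas = np.zeros((target, target, 3), dtype=np.float32)
    canvas[:nh, :nw] = (resized - MEAN) / STD   # normalize content only
    return canvas, scale                        # scale maps boxes back later
```

Note the zero padding is written after normalization, so the padded region stays exactly zero, which is what the maintainer asked for earlier in this thread.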