ssds.pytorch

All the ssds methods are based on 300; can it be changed to 500 or 512?

Open lucasjinreal opened this issue 6 years ago • 12 comments

Can the input image size be changed to another value?

lucasjinreal avatar Nov 04 '18 02:11 lucasjinreal

There is no FC layer in the model, so you can feed it any size.

1453042287 avatar Nov 23 '18 03:11 1453042287
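
For illustration, a minimal sketch showing why a fully convolutional model accepts arbitrary input sizes; the toy layers below are hypothetical, not the actual ssds.pytorch backbone:

```python
import torch
import torch.nn as nn

# Toy fully convolutional "backbone": no fully connected layer,
# so any spatial input size is accepted.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(inplace=True),
)

for size in [(300, 300), (512, 512), (500, 500)]:
    x = torch.randn(1, 3, *size)
    y = backbone(x)
    print(size, "->", tuple(y.shape))  # spatial dims scale with the input
```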

You do need to change the STEPS and SIZES variables in the config file so that you have the correct anchor sizes for 512-sized images.

burhanmudassar avatar Feb 27 '19 18:02 burhanmudassar
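
A minimal sketch of the kind of rescaling meant here. The 300-input values below follow the original SSD paper defaults and are an assumption; the exact keys and values in this repo's config files may differ:

```python
# SSD300 anchor configuration (values from the original SSD paper; verify
# against the actual ssds.pytorch config you are using).
STEPS_300 = [8, 16, 32, 64, 100, 300]
SIZES_300 = [30, 60, 111, 162, 213, 264, 315]

new_dim = 512

# Rescale the anchor sizes proportionally to the new input resolution.
SIZES_512 = [round(s / 300 * new_dim) for s in SIZES_300]
print(SIZES_512)  # [51, 102, 189, 276, 364, 451, 538]

# STEPS should match the stride of each feature map for the new input size
# (see the sketch further down), not just be rescaled proportionally.
```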

@burhanmudassar How exactly do you change the STEPS and SIZES variables?

I changed the STEPS so that they still cover the entire image, then I rescaled the previous SIZES by dividing by 300 and multiplying by the dimension of my dataset. Training still shows the same poor improvement, going from 0 mAP to only 0.04 mAP in 400 epochs. Is there something wrong with this procedure?

Thanks

kamauz avatar Mar 21 '19 13:03 kamauz
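
For the STEPS part of the procedure above, a minimal sketch of one way to recompute them, assuming you know the feature-map sizes the backbone produces at the new resolution (the values below are the typical SSD512 ones and are an assumption, not taken from this repo):

```python
import math

input_dim = 512
# Feature-map sizes the SSD512 heads typically attach to (an assumption;
# verify against the shapes the model actually produces for your input).
feature_maps = [64, 32, 16, 8, 4, 2, 1]

# A step is how many input pixels one feature-map cell spans, so anchors
# placed per cell tile the whole image.
STEPS_512 = [math.ceil(input_dim / f) for f in feature_maps]
print(STEPS_512)  # [8, 16, 32, 64, 128, 256, 512]
```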

@kamauz Can you get the desired result with the default settings?

1453042287 avatar Mar 24 '19 02:03 1453042287

@1453042287 @burhanmudassar

I can't get good results even with the default settings. Is it possible to make it work with rectangular images (no resizing), like 640x480 or 1920x1080?

kamauz avatar Mar 25 '19 13:03 kamauz

@jinfagang @burhanmudassar @1453042287 @kamauz I also want to change 300 to 512 or 500. Did you succeed?

Damon2019 avatar Sep 17 '19 07:09 Damon2019

@Damon2019 I stopped working with this repository about 5 months ago, but as far as I remember it worked with square pictures like 300x300, 512x512, 500x500, and so on. Increasing the dimension should give better accuracy but slower execution. If you are interested, I think I found a way to make it work with rectangular sizes too:

  • when it computes the feature-map sizes, it uses the same dimension twice because it assumes square images by default. I didn't try the change myself, but I suggest you try it.

kamauz avatar Sep 17 '19 08:09 kamauz
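
A minimal sketch of the per-dimension change kamauz describes: computing the feature-map grid separately for height and width instead of assuming a square input. The helper and stride values below are hypothetical, for illustration only:

```python
import math

def feature_map_shapes(img_h, img_w, steps):
    """Feature-map (rows, cols) per detection layer for a possibly
    non-square input, instead of one square size used twice."""
    return [(math.ceil(img_h / s), math.ceil(img_w / s)) for s in steps]

# Example: a 640x480 input with typical SSD strides.
print(feature_map_shapes(480, 640, steps=[8, 16, 32, 64, 128]))
# [(60, 80), (30, 40), (15, 20), (8, 10), (4, 5)]
```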

OK, I will try it.

Damon2019 avatar Sep 17 '19 08:09 Damon2019

@kamauz Hi, I have a question about training with and without pre-trained models. I don't know how the following parameter should be set:

TRAINABLE_SCOPE: 'base,norm,extras,loc,conf' or TRAINABLE_SCOPE: 'norm,extras,loc,conf'

Do you remember how you set this parameter? I would be very happy with any suggestion.

Damon2019 avatar Sep 20 '19 08:09 Damon2019
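
As far as I can tell, the difference between the two values is whether the backbone ('base') is fine-tuned or kept frozen when starting from pre-trained weights. A minimal sketch of how such a comma-separated scope string is typically applied in PyTorch; the helper below is an illustration under that assumption, not this repo's actual training code:

```python
def apply_trainable_scope(model, trainable_scope):
    """Freeze everything, then unfreeze only the sub-modules whose
    top-level names appear in the comma-separated scope string."""
    scopes = set(trainable_scope.split(','))
    for name, param in model.named_parameters():
        top_module = name.split('.')[0]  # e.g. 'base', 'extras', 'loc'
        param.requires_grad = top_module in scopes

# Training from scratch: train everything, including the backbone.
# apply_trainable_scope(model, 'base,norm,extras,loc,conf')

# Fine-tuning from a pre-trained backbone: keep 'base' frozen and train
# only the SSD-specific layers.
# apply_trainable_scope(model, 'norm,extras,loc,conf')
```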

I have the same problem. Can you tell me how to solve it? Thanks.

QZ-cmd avatar Jul 10 '20 04:07 QZ-cmd

I left this repository and moved to the official TensorFlow repo for the training phase of my project. I'm sorry, but I only had an idea of how to solve it: the method that generates the feature maps seems to produce a square map by default.

kamauz avatar Jul 10 '20 05:07 kamauz

Not sure about the older version on master, but the dev branch code can definitely use variable image sizes with different aspect ratios.

However, for some detector heads the upsample size is forced to be exactly 2x larger to make conversion to ONNX and TensorRT easier, so there are some limitations on the input image size. For example, with the yolov3 or FPN detection heads, a 1920x1080 input produces a dimension-mismatch error in the concat layer; in this case the input size needs to be adjusted to 1920x1088.

Please try the dev branch and let me know if you still have this issue. Thanks.

foreverYoungGitHub avatar Jul 13 '20 15:07 foreverYoungGitHub
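
A minimal sketch of the adjustment described above: rounding the input height and width up to the nearest multiple of the network stride so the x2 upsample and concat shapes line up (32 is assumed here as the overall stride for yolov3/FPN-style heads):

```python
def round_up_to_stride(h, w, stride=32):
    """Round spatial dims up to the nearest multiple of the stride so that
    repeated /2 downsampling and x2 upsampling produce matching shapes."""
    pad_h = (stride - h % stride) % stride
    pad_w = (stride - w % stride) % stride
    return h + pad_h, w + pad_w

print(round_up_to_stride(1080, 1920))  # (1088, 1920) -- matches the comment above
```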