will you provide the training script?

gaopeng-eugene opened this issue 8 years ago • 20 comments

gaopeng-eugene avatar Dec 24 '16 06:12 gaopeng-eugene

same

dongzhuoyao avatar Dec 25 '16 03:12 dongzhuoyao

I am also interested in this question. Thanks.

mjohn123 avatar Dec 25 '16 13:12 mjohn123

same

rickythink avatar Dec 26 '16 10:12 rickythink

Has anyone re-trained successfully?

huaxinxiao avatar Dec 26 '16 11:12 huaxinxiao

same

rener1199 avatar Dec 26 '16 13:12 rener1199

same

justinbuzzni avatar Dec 27 '16 09:12 justinbuzzni

Hi, for training, the issues are mainly related to the bn layer:

  1. Whether to update the parameters (mean, variance, slope, bias) of 'bn': if you are working on the same (or a similar) dataset as the released model, you can simply fix the 'bn' layers for fine-tuning; if not, you may need to update the parameters.
  2. What needs more attention when updating the parameters: the batch size used during batch normalization is important, and it is better to keep it above 16 per calculation step, because the running mean and variance need to stay close to the global statistics that will be used at test time. But semantic segmentation is memory-consuming, and maintaining a large crop size (which depends on the dataset) can force a small batch size on each GPU card. So during our training step, we use MPI to gather data from the different GPU cards and then do the bn operation (see the numpy sketch below). Current official Caffe does not seem to support such communication. We are trying to make our training code compatible with BVLC Caffe; in the meantime you can have a look at yjxiong's Caffe version, which is an OpenMPI-based multi-GPU build. If you are working on other datasets, maybe another platform can support such bn communication. Sorry for the inconvenience of the current, evaluation-only code release. Thanks.
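For intuition, here is a toy numpy sketch (my own illustration, not the authors' MPI code) of how per-GPU partial sums combine into the exact whole-batch bn statistics; the 4-GPU split and tensor sizes are made up for the example:

```python
# Toy illustration: gathering per-GPU partial sums reproduces the
# whole-batch mean/variance exactly, even though each "GPU" only
# ever sees a small sub-batch.
import numpy as np

np.random.seed(0)
full_batch = np.random.randn(16, 64)      # 16 samples, 64 channels
per_gpu = np.split(full_batch, 4)         # sub-batch of 4 per "GPU"

# Each GPU computes local partial statistics (this is what gets communicated).
counts  = [x.shape[0] for x in per_gpu]
sums    = [x.sum(axis=0) for x in per_gpu]
sq_sums = [(x ** 2).sum(axis=0) for x in per_gpu]

# The all-reduce step (here just a plain Python sum) yields global statistics.
n = sum(counts)
mean = sum(sums) / n
var = sum(sq_sums) / n - mean ** 2        # E[x^2] - (E[x])^2

assert np.allclose(mean, full_batch.mean(axis=0))
assert np.allclose(var, full_batch.var(axis=0))
```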

hszhao avatar Dec 28 '16 10:12 hszhao

@hszhao: Thanks for the information. I am working on the same Cityscapes dataset, with a single TITAN X (Pascal) GPU. Is it possible to run your training on my machine? If not, could you reduce the ResNet depth, e.g. to 54 layers? I am also a beginner with Caffe, so I do not know how to set up the training model from your first point.

mjohn123 avatar Dec 28 '16 10:12 mjohn123

@hszhao I am training a 713-resolution PSPNet on 2 x 12 GB TITAN X cards with batch size 1, and almost all of the memory is used.

So I guess training with batch size 16 would require about 32 TITAN X cards (12 GB memory)?

I cannot find details about how many GPUs were used in the paper, so I want to confirm how many GPUs are required to train with batch size 16, in your experience.

I really wonder what the quantitative performance difference between batch size 16 and batch size 1 is, because in the paper and in this thread you emphasize that batch size matters, yet in deeplab-v2 (and in my own experience) training with batch size 1 also works, to some extent. Do I really need batch size 16 (and potentially 32 cards?) to achieve the ideal performance? ...

Fromandto avatar Dec 29 '16 02:12 Fromandto

@Fromandto If your batch size is 1, the batch normalization layer may not work. However, the bn layer seems important to the performance of PSPNet.

huaxinxiao avatar Dec 29 '16 02:12 huaxinxiao

@huaxinxiao yes, this is exactly what I am concerned about ... but I just don't have 32 GPUs (or is there something wrong with my setup, such that 4 GPUs would actually be enough to train with batch size 16?)

Fromandto avatar Dec 29 '16 03:12 Fromandto

@Fromandto A smaller crop size (<321) will work on 4 GPUs. Besides, you should use the OpenMPI-based multi-GPU Caffe to gather the bn parameters.

huaxinxiao avatar Dec 29 '16 04:12 huaxinxiao

@Fromandto Could you share your training script?

suhyung avatar Jan 06 '17 08:01 suhyung

@suhyung I am using the training script of deeplab-v2; it is compatible.

Fromandto avatar Jan 11 '17 04:01 Fromandto

@Fromandto @hszhao Could you tell me some details about training? I'm using deeplab-v2 Caffe, and I'm ready to train a model with my own Python script. But I don't have any proper initial weights for pspnet101-VOC2012.prototxt. I tried to use the initial parameters from deeplab-v2, but the layer names are different. Should I pre-train a network on ImageNet myself?

SoonminHwang avatar May 12 '17 13:05 SoonminHwang

@SoonminHwang You can either transfer the weights over with matcaffe/pycaffe, or you can replace the ResNet part of the PSPNet prototxt with the DeepLab version. By the way, the PSPNet ResNet backbone is not structurally identical to the original ResNet: the first few convs are 3x3 here instead of 7x7, so you wouldn't be able to transfer the DeepLab weights over exactly. Therefore, the easiest solution would be to use the original ResNet/DeepLab structure and then initialise from their public weights.
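For the pycaffe route, a minimal sketch of such a name-matched transfer could look like this (the prototxt/caffemodel filenames are placeholders; layers whose names or blob shapes don't match, e.g. the 3x3 conv stem, are skipped and keep their random initialisation):

```python
import caffe

caffe.set_mode_cpu()
# Source: a released DeepLab/ResNet model (placeholder filenames).
src = caffe.Net('resnet101_deeplab.prototxt',
                'resnet101_deeplab.caffemodel', caffe.TEST)
# Target: the PSPNet definition, randomly initialised.
dst = caffe.Net('pspnet101_VOC2012.prototxt', caffe.TEST)

for name, blobs in src.params.items():
    if name not in dst.params:
        continue                              # layer absent in PSPNet
    for i, blob in enumerate(blobs):          # blob 0 = weights, 1 = bias, ...
        if dst.params[name][i].data.shape == blob.data.shape:
            dst.params[name][i].data[...] = blob.data
        else:
            print('shape mismatch, skipping %s[%d]' % (name, i))

dst.save('pspnet101_init.caffemodel')         # use as training init weights
```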

qizhuli avatar May 18 '17 01:05 qizhuli

@SoonminHwang Did you find an init weights file?

ThienAnh avatar Oct 19 '17 06:10 ThienAnh

What init weights should we use to train the PSPNet Cityscapes model?

tkasarla avatar Dec 08 '17 12:12 tkasarla

@huaxinxiao can you train this pspnet with SyncBN?

melody-rain avatar Jan 11 '18 13:01 melody-rain

I've implemented sync batch normalization in pure tensorflow, which makes it possible to train and reproduce the performance of PSPNet: batch norm across GPUs.
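For reference, here is a schematic TensorFlow 1.x sketch of the idea (my own sketch, not holyseven's actual implementation): each tower computes partial sums, an NCCL all-reduce shares them, and every tower then normalises with the resulting global mean/variance. Device placement and the creation of beta/gamma are omitted for brevity:

```python
import tensorflow as tf
from tensorflow.contrib import nccl  # TF 1.x

def sync_batch_norm(towers, beta, gamma, eps=1e-5):
    """towers: list of NHWC feature tensors, one per GPU device."""
    sums, sq_sums, counts = [], [], []
    for x in towers:
        # Per-GPU partial sums over batch and spatial axes.
        n = tf.cast(tf.reduce_prod(tf.shape(x)[:3]), tf.float32)
        sums.append(tf.reduce_sum(x, axis=[0, 1, 2]))
        sq_sums.append(tf.reduce_sum(tf.square(x), axis=[0, 1, 2]))
        counts.append(n)
    # NCCL all-reduce: every GPU ends up with the global sums.
    sums = nccl.all_sum(sums)
    sq_sums = nccl.all_sum(sq_sums)
    total = tf.add_n(counts)
    outputs = []
    for x, s, sq in zip(towers, sums, sq_sums):
        mean = s / total
        var = sq / total - tf.square(mean)  # E[x^2] - (E[x])^2
        outputs.append(
            tf.nn.batch_normalization(x, mean, var, beta, gamma, eps))
    return outputs
```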

holyseven avatar Apr 23 '18 13:04 holyseven