semantic-segmentation-pytorch
Will you release the results on CityScapes?
Great work!
I am wondering about the reproduced performance on CityScapes. It would be great if you could share the related results.
We're afraid we're short-handed, so we can't do that right now.
It shouldn't be hard, though. You could give it a try.
@Tete-Xiao Thanks for your quick reply. I want to try ResNet101, but it seems that the URL you provided is not valid:
http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnet101-imagenet.pth
Also, the official ResNet uses a 7x7 conv for the stem, instead of the setting below:
# Deep stem: three 3x3 convs (with SyncBN) replace the original 7x7 conv.
self.conv1 = conv3x3(3, 64, stride=2)
self.bn1 = SynchronizedBatchNorm2d(64)
self.relu1 = nn.ReLU(inplace=True)
self.conv2 = conv3x3(64, 64)
self.bn2 = SynchronizedBatchNorm2d(64)
self.relu2 = nn.ReLU(inplace=True)
self.conv3 = conv3x3(64, 128)
self.bn3 = SynchronizedBatchNorm2d(128)
self.relu3 = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
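For comparison, the stock torchvision-style ResNet stem uses a single 7x7 convolution, which is why the official ImageNet checkpoint does not line up with the three-conv (deep) stem above. A rough sketch of the standard stem, for illustration only:

import torch.nn as nn

class StockResNetStem(nn.Module):
    """Standard torchvision-style ResNet stem: a single 7x7 conv.

    The deep-stem variant above replaces this with three 3x3 convs,
    so parameter names and weight shapes no longer match the official
    resnet101 checkpoint and it cannot be loaded directly.
    """
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        return self.maxpool(self.relu(self.bn1(self.conv1(x))))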
@Tete-Xiao Thanks for your repo. I could reproduce PSP and achieve 78.6 mIoU on CityScapes with the hyperparameters you provided, using my own code base.
@PkuRainBow Hi, could you share your training configuration? I tried to train PSP on CityScapes but only got 68 mIoU. I use cropping instead of resizing the image, with a crop size of 512x512. Thanks a lot!
@PkuRainBow Good to hear that! We're glad to know our toolbox is useful.
@tonysy My crop size is 769x769.
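For reference, a minimal random-crop sketch along the lines described above. The crop size, zero padding, and ignore_label value are assumptions for illustration, not the exact augmentation used by the posters:

import random
import numpy as np

def random_crop(image, label, crop_h=769, crop_w=769, ignore_label=255):
    """Randomly crop an image/label pair to crop_h x crop_w.

    Pads with zeros (image) and ignore_label (label) when the input is
    smaller than the crop, as is common for Cityscapes training.
    """
    h, w = image.shape[:2]
    pad_h, pad_w = max(crop_h - h, 0), max(crop_w - w, 0)
    if pad_h or pad_w:
        image = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), constant_values=0)
        label = np.pad(label, ((0, pad_h), (0, pad_w)), constant_values=ignore_label)
        h, w = image.shape[:2]
    top = random.randint(0, h - crop_h)
    left = random.randint(0, w - crop_w)
    return (image[top:top + crop_h, left:left + crop_w],
            label[top:top + crop_h, left:left + crop_w])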
@PkuRainBow Hi, could you share your training configuration, such as the learning rate and data augmentation?
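For what it's worth, PSPNet-style training commonly uses the "poly" learning-rate policy. A minimal sketch; the base LR of 0.01 and power of 0.9 are the values from the PSPNet paper, not necessarily the ones used by the posters above:

def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """'Poly' schedule: lr = base_lr * (1 - cur_iter / max_iter) ** power."""
    return base_lr * (1.0 - float(cur_iter) / max_iter) ** power

# Example: decay from 0.01 over 90k iterations.
for it in range(0, 90000, 30000):
    print(it, poly_lr(0.01, it, 90000))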
@boundles We are preparing a new work which is simpler and performs better than PSP. We will release the paper and code in the coming months. Please keep an eye on my GitHub repo.