DIST_KD
Segmentation val mIoU does not improve from 74.21 ---> 77.10 with the DIST KD method based on DeepLabV3-ResNet18
Dear hunto:
Recently I reproduced your paper's method with DIST KD on Cityscapes segmentation, but I got a worse result. My experiment is as follows. The parameters are based on https://github.com/hunto/DIST_KD/blob/main/segmentation/README.md
First, I ran the DIST KD method and got validation pixAcc: 95.867, mIoU: 77.542.
Second, I ran without the DIST KD method and got validation pixAcc: 95.745, mIoU: 76.311.
So I cannot reproduce the mIoU improvement of 74.21 ---> 77.10; I only see about a 1% improvement in my experiment.
Here are my training logs:
KD log: deeplabv3_resnet101_resnet18_log_using_KD.txt
Without KD log: deeplabv3_resnet101_resnet18_log_without_KD.txt
I'm looking forward to your reply. Thanks!
And the training scripts are: no KD: train_without_kd.txt, using KD: train_kd.txt
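(For context on what the "DIST KD method" above computes: the DIST objective matches inter-class and intra-class Pearson correlations between the softened student and teacher predictions. Below is a minimal PyTorch sketch of that loss; the helper names, the flattening of segmentation logits to a (pixels, classes) matrix, and the beta/gamma/tau defaults are illustrative assumptions, not the exact settings of this repository.)

```python
import torch
import torch.nn.functional as F


def pearson_corr(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Row-wise Pearson correlation between two 2-D tensors of equal shape."""
    a = a - a.mean(dim=-1, keepdim=True)
    b = b - b.mean(dim=-1, keepdim=True)
    a = a / (a.norm(dim=-1, keepdim=True) + eps)
    b = b / (b.norm(dim=-1, keepdim=True) + eps)
    return (a * b).sum(dim=-1)


def dist_loss(logits_s, logits_t, beta=1.0, gamma=1.0, tau=1.0):
    """Sketch of the DIST loss: inter-class + intra-class correlation matching.

    logits_s, logits_t: (M, C) student / teacher logits. For segmentation
    outputs of shape (N, C, H, W), flatten first, e.g.
    logits.permute(0, 2, 3, 1).reshape(-1, num_classes).
    """
    p_s = F.softmax(logits_s / tau, dim=1)
    p_t = F.softmax(logits_t / tau, dim=1)
    # Inter-class term: correlation across classes for each pixel/sample (rows).
    inter = 1.0 - pearson_corr(p_s, p_t).mean()
    # Intra-class term: correlation across pixels/samples for each class (columns).
    intra = 1.0 - pearson_corr(p_s.t(), p_t.t()).mean()
    return beta * inter + gamma * intra
```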
Dear @songyang86 ,
Thanks for the detailed description of your implementation.
Our semantic segmentation code is based on the code in CIRKD, so we directly reported the mIoU from the CIRKD paper and haven't trained the student without KD ourselves.
But for our DIST, we trained the models using our own code. I see that you obtained a higher mIoU than ours (77.54 vs. 77.10). I will retrain the student with and without DIST to check the results again.
Dear hunto: How are your retraining results of DIST going? Best wishes!
Hi @songyang86 ,
I've trained DIST on the DeepLabV3 R101-R18 setting and got a highest validation mIoU of 77.24. You can see the log below: deeplabv3_resnet101_resnet18_log.txt
Have you tested the PSP-R18 model? Based on the code in this repo, I got mIoU = 75.53 and 75.37 in two independent runs with DIST KD, which is lower than the mIoU in the README file.
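(Since the thread compares pixAcc and mIoU numbers across runs, here is a generic sketch of how these metrics are commonly computed from a per-class confusion matrix. This is for illustration only and is not the exact evaluation code of this repository.)

```python
import numpy as np


def segmentation_metrics(conf: np.ndarray):
    """Compute pixAcc and mIoU from a (C, C) confusion matrix.

    conf[i, j] is the number of pixels with ground-truth class i
    that were predicted as class j (ignored pixels excluded).
    """
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp   # predicted as class c but wrong
    fn = conf.sum(axis=1) - tp   # ground-truth class c but missed
    pix_acc = tp.sum() / max(conf.sum(), 1)
    denom = tp + fp + fn
    # Classes that never appear (denom == 0) are excluded from the mean.
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    miou = np.nanmean(iou)
    return pix_acc, miou
```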