
swiftnet

Open · dxjundersky opened this issue on Nov 21, 2019 · 3 comments

Thanks for sharing. I just trained the model with default parameters on the Cityscapes dataset (1024×2048). The mIoU is 0.711 after 200 epochs, but the paper "In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images" reports an mIoU of 0.754. The author shared the code but did not publish the training procedure. Can you reproduce the results in the paper?

dxjundersky avatar Nov 21 '19 07:11 dxjundersky

In fact, my tf1 implementation's mIoU is around 0.71, and my tf2 implementation can reach around 0.73. My training procedure follows the paper, so maybe the pre-trained weights account for the gap.
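For reference, a minimal sketch of the schedule as I read it from the paper: Adam with a cosine-annealed learning rate from 4e-4 down to about 1e-6. The batch size and epoch count below are my assumptions, not the repo's exact values, so check the paper before copying them.

```python
# Sketch of the training schedule described in the paper (my reading):
# Adam with a cosine-annealed learning rate from 4e-4 down to ~1e-6.
import tensorflow as tf

EPOCHS = 200                           # assumed; adjust to your budget
BATCH_SIZE = 14                        # assumed, per my reading of the paper
STEPS_PER_EPOCH = 2975 // BATCH_SIZE   # 2975 finely annotated Cityscapes train images

lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=4e-4,
    decay_steps=EPOCHS * STEPS_PER_EPOCH,
    alpha=1e-6 / 4e-4,                 # floor the decayed rate near 1e-6
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```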

Katexiang avatar Dec 01 '19 06:12 Katexiang

Can you tell me how to evaluate the training result? There is no eval.py or similar file.

shifangtian avatar Dec 27 '19 07:12 shifangtian

My training code already contains the evaluation step.
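If you want to evaluate separately, a standalone mIoU check is easy to write. A minimal sketch in NumPy (the helper names are hypothetical, not part of this repo; 19 Cityscapes train classes, with 255 assumed as the ignore label):

```python
import numpy as np

NUM_CLASSES = 19   # Cityscapes train classes
IGNORE = 255       # assumed void label

def update_confusion(conf, pred, gt):
    """Accumulate a NUM_CLASSES x NUM_CLASSES confusion matrix from label maps."""
    mask = gt != IGNORE
    idx = NUM_CLASSES * gt[mask].astype(np.int64) + pred[mask].astype(np.int64)
    conf += np.bincount(idx, minlength=NUM_CLASSES ** 2).reshape(NUM_CLASSES, NUM_CLASSES)
    return conf

def mean_iou(conf):
    """Per-class IoU = TP / (TP + FP + FN), averaged over classes."""
    tp = np.diag(conf)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp
    return (tp / np.maximum(denom, 1)).mean()

# usage sketch:
# conf = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
# for pred, gt in val_set: conf = update_confusion(conf, pred, gt)
# print(mean_iou(conf))
```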

Katexiang avatar Jan 06 '20 02:01 Katexiang