End-to-end-CD-for-VHR-satellite-image
Question about training the model
I read your paper and the original UNet++ paper, and everything is clear to me apart from how to train the network on a new dataset. There is the option of deep supervision, in which the network takes the co-registered image pairs concatenated as input (each concatenated pair has size 256x256x6) and produces [nestnet_output_1, nestnet_output_2, nestnet_output_3, nestnet_output_4, nestnet_output_5] as output, each with dimensions 256x256x1. I would like to reproduce your experiments with deep supervision. The input is clear to me, but I fail to understand what the output should be. Specifically, how can I produce these five output matrices? In the Lebedev dataset only one grayscale image was used as output, and each pixel of that image was produced by subtracting the two input images.
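For context, here is a minimal sketch of how a deep-supervision training setup like the one described above is often wired up in Keras. This is not the authors' code: the toy model below only stands in for the deep-supervised UNet++, and the output names `nestnet_output_*`, the data shapes, and the binary cross-entropy losses are assumptions based on the question. A common choice for deep supervision is to reuse the single ground-truth change mask as the target for every side output.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def toy_five_output_model(input_shape=(256, 256, 6)):
    """Toy stand-in for a deep-supervised UNet++: five 256x256x1 sigmoid heads."""
    inputs = keras.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    outputs = [
        layers.Conv2D(1, 1, activation="sigmoid", name=f"nestnet_output_{i}")(x)
        for i in range(1, 6)
    ]
    return keras.Model(inputs, outputs)

model = toy_five_output_model()

# x: (N, 256, 256, 6) concatenated co-registered image pairs
# y: (N, 256, 256, 1) binary change masks (e.g. from the Lebedev dataset)
x_train = np.random.rand(4, 256, 256, 6).astype("float32")
y_train = (np.random.rand(4, 256, 256, 1) > 0.5).astype("float32")

model.compile(
    optimizer=keras.optimizers.Adam(1e-4),
    loss=["binary_crossentropy"] * 5,  # one loss term per side output
    loss_weights=[1.0] * 5,            # per-output weights, tunable
)

# The same ground-truth mask is supplied as the target for all five outputs.
model.fit(x_train, [y_train] * 5, batch_size=2, epochs=1)
```

Under this setup no extra target matrices need to be produced; the one change mask per pair is simply repeated for each supervised output.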
Hi @mpegia, did you solve this problem?