tf_unet
Size of output images doesn't match the original paper
Joel,
I am currently working on your UNet architecture with my own dataset (500 x 376 images, following https://github.com/jakeret/tf_unet/issues/6). I have several questions; do you have time to discuss them?
In the U-Net paper the proposed architecture has 5 layers with 3x3 convolution filters and 64 features. If I compute the output image size with those settings and my own dataset, I get a theoretical output size of 308 x 184. In your code you compute an offset which corresponds to a "real" offset equal to 192. But when I use your code I get a size of 308 x 180.
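To make the comparison concrete, here is roughly how I compute the theoretical size along one dimension (just a sketch: valid_unet_size is my own helper name, and I assume valid convolutions, two per layer, with floor rounding when an odd size is max-pooled):

```python
def valid_unet_size(in_size, layers=5, filter_size=3, pool_size=2):
    """Theoretical output size of a valid-padding U-Net along one dimension.
    Rough sketch only; integer division models max-pooling of odd sizes."""
    size = in_size
    # contracting path: two valid convolutions per layer, then max-pooling
    for layer in range(layers):
        size -= 2 * (filter_size - 1)
        if layer < layers - 1:
            size //= pool_size
    # expanding path: upsampling, then two valid convolutions per layer
    for layer in range(layers - 1):
        size *= pool_size
        size -= 2 * (filter_size - 1)
    return size

print(valid_unet_size(500), valid_unet_size(376))
# -> 308 180 with 5 layers and 3x3 filters (the odd intermediate heights
#    get floored at the pooling steps)
```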
In the same way, when I try to use a different convolution filter size with a different network depth (for instance 3 layers), the output size does not match the theoretical size.
I started fixing this problem by changing the offset definition to take into account the number of border pixels lost for a variable convolution filter size. So far, I think your code only works with 3x3 convolutions and the offset has to be recomputed. Maybe I am wrong or I missed a step. For instance, you have written in unet.py:
lines 95 and 130: size -= 4
line 99 or 129: size *= 2
I think these lines are correct only if the max-pooling is 2x2 and the convolution is 3x3.
I am trying to understand how and why the network produces this gap between the theoretical and the experimental size.
I'm working on Python 3.5 and TensorFlow 0.12 GPU.
I'd be happy to discuss this with you!
I think you might be right. The values shouldn't be hardcoded but rather derived from the parameters. I have to check how the filter size and the striding affect the offset. By how much does the offset change when using a MaxPooling > 2?
It also seems that the offset does not take the last convolution into account.
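Off the top of my head, deriving it could look something like this (an untested sketch only; in_out_offset is a made-up helper, I'm assuming the filter_size / pool_size / layers parameters of create_conv_net, and the loop placement is a guess, not the current code in unet.py):

```python
def in_out_offset(in_size, layers, filter_size=3, pool_size=2):
    """Rough guess at a parameter-derived offset instead of the hardcoded values."""
    size = in_size
    for layer in range(layers):
        size -= 2 * (filter_size - 1)   # instead of size -= 4 on the way down
        if layer < layers - 1:
            size //= pool_size
    for layer in range(layers - 1):
        size *= pool_size               # instead of size *= 2 on the way up
        size -= 2 * (filter_size - 1)   # includes the last convolutions as well
    return in_size - size

print(in_out_offset(500, layers=5))                # 192, the "real" offset mentioned above
print(in_out_offset(500, layers=3, pool_size=3))   # 75 with 3 layers, so a MaxPooling > 2 does change it
```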
I'll work on this as soon as I find some time. Any input is of course very welcome :)
Hi Joel,
Sorry for the delay, I was very busy these last weeks.
Can you give me the necessary permissions to create a new branch and push my code to your Git repository?
In your create_conv_net function, I added code to compute the theoretical number of convolution filters / pooling operations (as in the original paper). It can be used to get the gap between the output size and the "real" size expected.
I made other modifications related to TensorBoard for network visibility (I just added some scopes).
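To illustrate, the kind of change I mean is just wrapping each layer's ops in a name scope, something like this simplified example (not my actual patch, and the filter counts are made up):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, None, None, 1], name="input")
node = x
for layer in range(3):
    # grouping each layer's ops under one name scope collapses them into a
    # single expandable node in the TensorBoard graph view
    with tf.name_scope("down_conv_%d" % layer):
        in_channels = 1 if layer == 0 else 16
        w = tf.Variable(tf.truncated_normal([3, 3, in_channels, 16], stddev=0.1))
        node = tf.nn.relu(tf.nn.conv2d(node, w, strides=[1, 1, 1, 1], padding="VALID"))
```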
Currently the problem is still not solved, but my code can be used as a starting point to fix it.
Best,
Jordan.
You can send me a pull request so that I can merge your changes into the official repository
Fine, thank you.
With my code you can try to design a network with 2 layers and visualize it in TensorBoard. I think there is an error in the design of the descending path. According to the paper, the input images only feed the first convolution layer, then the output of the second convolution feeds the input of the second layer, etc. But with the current architecture, the input images feed all the layers directly. Do you think that is a possible reason for the gap with the theoretical size?
Why do you think this is happening? In line 98 the in_node gets overwritten in every iteration
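Schematically the wiring is like this (a toy stand-in, not the real create_conv_net code, with conv_block and max_pool as placeholder functions):

```python
# Toy stand-in for the contracting-path loop, only to show the wiring:
# in_node is reassigned every iteration, so each layer consumes the pooled
# output of the previous one rather than the raw input.

def conv_block(node, layer):        # stands in for the two 3x3 convolutions
    return "conv%d(%s)" % (layer, node)

def max_pool(node):                 # stands in for the 2x2 max-pooling
    return "pool(%s)" % node

in_node = "input"
for layer in range(3):
    print("layer %d is fed by: %s" % (layer, in_node))
    conv = conv_block(in_node, layer)
    in_node = max_pool(conv)        # overwritten here
# layer 0 is fed by: input
# layer 1 is fed by: pool(conv0(input))
# layer 2 is fed by: pool(conv1(pool(conv0(input))))
```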
I pushed my code with a pull request, sorry for the last mistake, I'm quite confused with Git. You are right, the in_node is overwritten at each step... I thought that because in TensorBoard we have the inputs linked with all the layers. But after checking it, the inputs are linked through Dropout and not directly.
Do you have an idea for a solution ?
No worries. I had a quick look and I think things get a bit tricky as soon as one uses a filter size > 3. It's certainly doable but I might need a bit more time