
Cannot train with max pooling size other than 2

Open galaxyfanfanwu opened this issue 7 years ago • 3 comments

Hi, I noticed that the network cannot be trained with a pool_size other than the default value of 2. The error message is:

```
InvalidArgumentError (see above for traceback): input and filter must have the same depth: 96 vs 144
	 [[Node: up_conv_1/conv2d/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_conv_1/crop_and_concat/concat, up_conv_1/w1/read)]]
```

I changed the numbers that depend on the max pooling size from their 2-based values to the new pool_size in unet.py, as well as in the definition of the function `deconv2d` in layers.py. But still, it didn't work. Is there some other way to modify the pool_size in the code, or does it have to be 2 somehow? Thank you very much!

galaxyfanfanwu avatar Aug 13 '18 16:08 galaxyfanfanwu
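For anyone stepping through this, here is a minimal, framework-free sketch of how spatial sizes propagate through a tf_unet-style encoder/decoder (two 3×3 VALID convolutions per level, a max pool of size `pool` between levels, a transposed conv that multiplies the spatial size by `pool` on the way up). The function and its exact layout are illustrative assumptions, not the library's API; it shows one constraint a non-default pool_size introduces, namely that the feature-map size must be divisible by `pool` at every level.

```python
# Hypothetical shape tracer for a tf_unet-style U-Net (assumption: two
# 3x3 VALID convs per level, each removing 2 pixels per spatial dim).
def trace_shapes(size, layers=3, pool=2):
    """Return (encoder_sizes, decoder_sizes) of one spatial dimension."""
    enc = []
    s = size
    for _ in range(layers - 1):
        s -= 4          # two 3x3 VALID convs, each shrinks the dim by 2
        enc.append(s)   # the skip connection is taken at this size
        if s % pool != 0:
            raise ValueError(f"size {s} not divisible by pool {pool}")
        s //= pool      # max pooling
    s -= 4              # convolutions at the bottom level
    dec = []
    for skip in reversed(enc):
        s *= pool       # transposed conv upsamples by `pool`
        # crop_and_concat center-crops the skip tensor down to size s
        s -= 4          # two VALID convs after the concatenation
        dec.append(s)
    return enc, dec
```

For example, `trace_shapes(572, layers=3, pool=2)` traces cleanly, while `pool=3` raises at the first level because 568 is not divisible by 3, so the input size must be chosen to fit the pool factor.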

It's possible that there is a bug in the up-conv or in the concatenation. It would be great if you could try to adapt the current implementation to see if it solves the problem.

jakeret avatar Aug 13 '18 18:08 jakeret

@jakeret Thanks, Joel. If I set pool_size=3, for instance, the output_shape in `deconv2d` becomes tf.stack([x_shape[0], x_shape[1]*3, x_shape[2]*3, x_shape[3]//3]). Is that correct? I'm not sure whether the last dimension should be x_shape[3] or x_shape[3]//3, but I tried both. As for `crop_and_concat`, I changed the offsets to [0, (x1_shape[1] - x2_shape[1]) // 3, (x1_shape[2] - x2_shape[2]) // 3, 0]. The error message still shows up after that...

galaxyfanfanwu avatar Aug 13 '18 19:08 galaxyfanfanwu
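One hedged observation on the `deconv2d` change above: in a U-Net up-conv, only the *spatial* dimensions scale with pool_size; the channel dimension is halved by the weight tensor regardless of the stride, so `x_shape[3] // 2` should likely stay `// 2` rather than become `// 3`. A sketch of a pool_size-aware variant (the signature is an assumption for illustration, not the exact layers.py code):

```python
# Hypothetical pool_size-aware deconv2d, adapted from the idea in
# tf_unet's layers.py. Assumption: W has shape
# [pool_size, pool_size, features // 2, features], so the up-conv maps
# `features` input channels to `features // 2` output channels.
import tensorflow as tf

def deconv2d(x, W, pool_size):
    x_shape = tf.shape(x)
    output_shape = tf.stack([x_shape[0],
                             x_shape[1] * pool_size,   # spatial dims scale
                             x_shape[2] * pool_size,   # with pool_size
                             x_shape[3] // 2])         # channels still halved
    return tf.nn.conv2d_transpose(x, W, output_shape,
                                  strides=[1, pool_size, pool_size, 1],
                                  padding='VALID')
```

Using `x_shape[3] // pool_size` instead would make the requested output depth disagree with the filter's output channels, which could produce exactly the kind of depth mismatch the error message reports.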

I'm not sure where the bug is. You'll probably have to step through the code and look at the tensor shapes.

jakeret avatar Aug 14 '18 10:08 jakeret
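When stepping through the shapes, one thing worth checking is `crop_and_concat`: the `// 2` in its offsets is a *centering* division (half of the size difference goes on each side of the crop), so it arguably should stay `// 2` for any pool_size; changing it to `// 3` shifts the crop off-center rather than adapting it to the pooling. A sketch following the structure described in this thread (treat it as an illustration, not the verbatim library code):

```python
# Sketch of a crop_and_concat as discussed above. The skip tensor x1 is
# center-cropped to the spatial size of the upsampled tensor x2, then the
# two are concatenated along the channel axis.
import tensorflow as tf

def crop_and_concat(x1, x2):
    x1_shape = tf.shape(x1)
    x2_shape = tf.shape(x2)
    # // 2 centers the crop; it is unrelated to pool_size
    offsets = [0, (x1_shape[1] - x2_shape[1]) // 2,
               (x1_shape[2] - x2_shape[2]) // 2, 0]
    size = [-1, x2_shape[1], x2_shape[2], -1]
    x1_crop = tf.slice(x1, offsets, size)
    return tf.concat([x1_crop, x2], 3)
```

With this centering kept at `// 2`, the concatenated depth is the sum of the two tensors' channel counts, which is what the following convolution's filter must then expect.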