ultrasound-nerve-segmentation
Conv2DTranspose throwing error but UpSampling2D works fine
Using Keras 2.0.3, Theano 0.9, Python 3.5.
Images are 256 x 256 grayscale, with binary masks.
Channels-first ordering is ensured in the Keras backend (so the channel axis is axis=1), using K.set_image_data_format('channels_first') and inputs = Input((1, dim, dim)).
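For reference, that setup in one place, as a minimal sketch (dim is just the 256 from the image size above):

from keras import backend as K
from keras.layers import Input

K.set_image_data_format('channels_first')   # tensors are (batch, channels, rows, cols)

dim = 256                                    # 256 x 256 grayscale images, one channel
inputs = Input((1, dim, dim))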
When running the U-Net before UpSampling2D was changed to Conv2DTranspose, the model trained without problems. When running the U-Net with Conv2DTranspose, Theano throws the error shown in the traceback below. Is there any advantage to using Conv2DTranspose?
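For context, a minimal sketch of the two decoder variants being compared; the tensor names and the (2, 2) kernel and stride are read off the traceback below rather than copied from the actual training script, so treat them as assumptions:

from keras.layers import Input, UpSampling2D, Conv2DTranspose, concatenate

# Stand-ins for the real encoder outputs (channels_first shapes taken from the traceback)
conv5 = Input((512, 16, 16))   # deepest feature map
conv4 = Input((256, 32, 32))   # encoder skip connection

# Original decoder step: parameter-free nearest-neighbour upsampling, then merge
up6_upsampled = concatenate([UpSampling2D(size=(2, 2))(conv5), conv4], axis=1)

# Replacement decoder step: learned upsampling via a strided transposed convolution
up6_transposed = concatenate(
    [Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv5), conv4],
    axis=1)

The practical difference is that UpSampling2D just repeats values and has no weights, while Conv2DTranspose learns its upsampling kernel, at the cost of extra parameters and, as the traceback shows, stricter shape bookkeeping on the Theano backend.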
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/theano/compile/function_module.py", line 884, in __call__
self.fn() if output_subset is None else\
ValueError: GpuCorrMM shape inconsistency:
bottom shape: 5 256 32 32
weight shape: 512 256 2 2
top shape: 5 512 16 16 (expected 5 512 17 17)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/.../src/train_binary.py", line 102, in <module>
train()
File "/Users/.../src/train_binary.py", line 77, in train
callbacks=[model_checkpoint, csv_log_cback])
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/keras/engine/training.py", line 1498, in fit
initial_epoch=initial_epoch)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/keras/engine/training.py", line 1152, in _fit_loop
outs = f(ins_batch)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/keras/backend/theano_backend.py", line 1158, in __call__
return self.function(*inputs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/theano/compile/function_module.py", line 898, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/theano/gof/link.py", line 325, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/six-1.10.0-py3.5.egg/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/theano/compile/function_module.py", line 884, in __call__
self.fn() if output_subset is None else\
ValueError: GpuCorrMM shape inconsistency:
bottom shape: 5 256 32 32
weight shape: 512 256 2 2
top shape: 5 512 16 16 (expected 5 512 17 17)
Apply node that caused the error: GpuCorrMM_gradInputs{half, (2, 2), (1, 1)}(GpuContiguous.0, GpuContiguous.0, TensorConstant{32}, TensorConstant{32})
Toposort index: 242
Inputs types: [CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, 4D), TensorType(int64, scalar), TensorType(int64, scalar)]
Inputs shapes: [(512, 256, 2, 2), (5, 512, 16, 16), (), ()]
Inputs strides: [(1024, 4, 2, 1), (131072, 256, 16, 1), (), ()]
Inputs values: ['not shown', 'not shown', array(32), array(32)]
Outputs clients: [[GpuSubtensor{int64:int64:int8, int64:int64:int8, int64:int64:int8, :int64:}(GpuCorrMM_gradInputs{half, (2, 2), (1, 1)}.0, ScalarFromTensor.0, ScalarFromTensor.0, Constant{1}, ScalarFromTensor.0, ScalarFromTensor.0, Constant{1}, Constant{0}, Constant{8}, Constant{1}, Constant{8})]]
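If the GpuCorrMM parameters are read as 'half' (i.e. 'same') padding with a 2 x 2 kernel and stride 2, the "expected 5 512 17 17" comes straight from the standard correlation output formula; a quick check of the numbers, as an illustration only:

# Consistency check Theano appears to apply: output size of the forward correlation
# with input H, kernel k, 'half' padding p = k // 2, stride s
H, k, s = 32, 2, 2
p = k // 2                               # 'half' padding -> p = 1
expected_top = (H + 2 * p - k) // s + 1  # (32 + 2 - 2) // 2 + 1 = 17
print(expected_top)                      # 17, but the graph supplies a 16 x 16 top

So the 16 x 16 feature map in the graph does not line up with what the gradInputs op expects for a 32 x 32 result under these settings.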
Can you please post the code in which you used UpSampling2D? I am also getting a shape error while using Conv2DTranspose:
ValueError: Error when checking target: expected conv2d_172 to have shape (None, 1, 320, 1) but got array with shape (286, 1, 314, 512)
Using Keras 2.0.1, Theano 0.9, Python 3.4. I am using the topology below; the actual image size is 327 x 532, resized to rows, cols = 314, 512.
from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, ZeroPadding2D, concatenate
from keras.models import Model
from keras.optimizers import Adam

rows, cols = 314, 512
inputs = Input((1, rows, cols))
zero = ZeroPadding2D((3,3))(inputs)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(zero)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool3)
conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5)
up6 = concatenate([Conv2DTranspose(256, (3, 3), strides=(2, 2), padding='same')(conv5), conv4], axis=3)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(up6)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv6)
up7 = concatenate([Conv2DTranspose(128, (3, 3), strides=(2, 2), padding='same')(conv6), conv3], axis=3)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(up7)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv7)
up8 = concatenate([Conv2DTranspose(64, (3, 3), strides=(2, 2), padding='same')(conv7), conv2], axis=3)
conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(up8)
conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv8)
up9 = concatenate([Conv2DTranspose(32, (3, 3), strides=(2, 2), padding='same')(conv8), conv1], axis=3)
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9)
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9)
conv10 = Conv2D(1, (1, 1), activation='sigmoid')(conv9)
model = Model(inputs=[inputs], outputs=[conv10])
model.compile(optimizer=Adam(lr=1e-5), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
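One thing worth checking, assuming the backend is configured for channels_first as in the original post: with that ordering the channel axis is 1, so skip connections are usually merged with concatenate(..., axis=1) rather than axis=3, and the "Error when checking target" message can be tracked down by comparing the model's output shape with the mask array actually passed to fit(). A minimal sketch (train_masks is a placeholder name, not from the repository):

# Compare what the model produces with what fit() receives as the target
model.summary()
print(model.output_shape)   # prediction shape, e.g. (None, 1, 320, 1) in the error above
print(train_masks.shape)    # mask array passed to fit(); name is assumed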