segnet
ValueError: total size of new array must be unchanged
On running segnet.py, I get the above error at this line:
autoencoder.add(Reshape((12,data_shape), input_shape=(12,360,480)))
Any solution to this?
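For reference, Reshape only rearranges elements, so the total element count of its input must equal the total count of the target shape. A minimal sketch of the arithmetic (assuming data_shape is 360*480, as the input_shape in that line suggests):

# Reshape requires the total number of elements to stay the same.
data_shape = 360 * 480                 # target shape is (12, data_shape)
elements_expected = 12 * data_shape    # 12 * 360 * 480 = 2073600
# If the decoder actually emits 12 x 352 x 480 maps (see the discussion below),
# the counts differ and Keras raises "total size of new array must be unchanged":
elements_produced = 12 * 352 * 480     # 2027520
print(elements_expected == elements_produced)   # False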
Try changing your image_dim_ordering from tf to th: https://keras.io/backend/#kerasjson-details
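For completeness, on Keras 1.x you can also switch the ordering programmatically instead of editing keras.json; a hedged sketch (on Keras 2 the setting is called image_data_format instead):

from keras import backend as K

# 'th' = Theano-style ordering, i.e. (channels, rows, cols)
K.set_image_dim_ordering('th')
print(K.image_dim_ordering())   # should now print 'th'
# equivalent keras.json entry: "image_dim_ordering": "th"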
@waissbluth, this also doesn't seem to work. I still get the error at the line autoencoder.add(Reshape((12,data_shape), input_shape=(12,360,480))).
I think the reason is this: during encoding, in one of the layers the shape changes from (512, 45, 60) to (512, 22, 30). During decoding, however, the shape goes from (512, 22, 30) to (512, 44, 60), and so on. So the final output shape, which turns out to be (some_dim, 352, 480), no longer matches the input shape, and the reshape fails.
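A quick sketch of the arithmetic behind that (sizes taken from the shapes above): MaxPooling2D with pool size 2 floors odd dimensions, but UpSampling2D only doubles, so an odd intermediate size can never be recovered exactly.

height = 45              # e.g. 360 after three 2x poolings
pooled = height // 2     # 45 -> 22 (floor division drops the remainder)
upsampled = pooled * 2   # 22 -> 44, not 45
print(pooled, upsampled) # 22 44
# three more doublings of 44 give 352, hence the (?, 352, 480) output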
Yes, I believe that's right. After each pooling layer the spatial dimensions are halved (stride of 2). For that to happen properly (and to be reversible during upsampling), in this implementation the original size has to be a multiple of either 8 (3 pooling layers) or, more strictly, 16 if you uncomment the last MaxPooling2D layer at the end of the encoder, effectively adding a fourth pooling operation.
Unless you change that line, an input size of 360x480 should not cause any problems. Otherwise, I've found that resizing all inputs to 352 x 480 works well enough.
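If you do resize, a small helper (hypothetical, not part of segnet.py) makes the constraint explicit: round each spatial dimension down to the nearest multiple of 2**n_poolings so that every pooling/upsampling pair round-trips exactly.

def valid_size(dim, n_poolings=4):
    # largest multiple of 2**n_poolings that fits in dim
    factor = 2 ** n_poolings
    return (dim // factor) * factor

print(valid_size(360))   # 352 -> hence the 352 x 480 suggestion
print(valid_size(480))   # 480, already a multiple of 16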
I'm also encountering this error and I'm running with the theano backend:
± python segnet.py
Using gpu device 0: GeForce GTX 1080 (CNMeM is disabled, cuDNN 5105)
/usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda/__init__.py:600: UserWarning: Your cuDNN version is more recent than the one Theano officially supports. If you see any problems, try updating Theano or downgrading cuDNN to version 5.
warnings.warn(warn)
Using Theano backend.
...............................................................................................................................................................................................................................................................................................................................................................................(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
> /home/ahundt/src/segnet/segnet.py(167)<module>()
165 autoencoder.add(Convolution2D(12, 1, 1, border_mode='valid',))
166 import ipdb; ipdb.set_trace()
--> 167 autoencoder.add(Reshape((12,data_shape), input_shape=(12,360,480)))
168 autoencoder.add(Permute((2, 1)))
169 autoencoder.add(Activation('softmax'))
ipdb>
Has anybody worked this out? I changed nothing in the code, and running it results in this shape error. I tried reshaping the input to 352 x 480 as you said worked for you, @PavlosMelissinos @fateh288, but nothing changed for me. Am I missing something?
Hey @yarin05, try plotting your network with:
from keras.utils import plot_model
plot_model(model, to_file='model.png', show_shapes=True)
to see if the output shapes in the encoder are mirrored in the decoder. It should help you locate the culprit.
@PavlosMelissinos @fateh288 I too get the same error at this point: autoencoder.add(Reshape((12,data_shape), input_shape=(12,360,480))). Can someone tell me in which layer I am making a mistake? The output shapes in the encoder are exactly the same as in the decoder.
Regards
@pranitapradhan91 You've mixed up the ordering of the dimensions: Keras by default uses NHWC (channels last), whereas you have NCHW (channels first).
Just change autoencoder.add(Layer(input_shape=(3, 360, 480))) to autoencoder.add(Layer(input_shape=(360, 480, 3))) here.
Otherwise, you can run K.set_image_data_format("channels_first") somewhere at the beginning of your code.
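If it helps, a minimal check (standard Keras 2 backend API) to confirm which convention is active before building the model:

from keras import backend as K

print(K.image_data_format())                # 'channels_last' or 'channels_first'
K.set_image_data_format('channels_first')   # only if you want (3, 360, 480) inputs
print(K.image_data_format())                # now 'channels_first'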
Thank you @PavlosMelissinos. This worked.
Any idea why BatchNormalization gives this error? Has anyone encountered it before?
I am using Keras 2.0.8 and Theano 0.10.0 on Windows 10. I tried downgrading the Theano version as suggested in another thread, but that did not work either.
I'm not using Theano, sorry; try their GitHub or the Google user group.