Colorizing-with-GANs
Test.py Input Shape Error
test.py gives an error with both grayscale and color images. For example, feeding a 184 x 274 grayscale image gives: ValueError: Cannot feed value of shape (1, 184, 274, 1, 3) for Tensor 'input_gray:0', which has shape '(?, ?, ?, 1)'
Thanks for your help!
Edit:
In dataset.py I changed line 147 from img = imread(path) to img = imread(path, mode='L'). Now the shape is (1, 184, 274, 1).
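For reference, a paraphrased sketch of that change (assuming dataset.py loads images with scipy.misc.imread, as the mode='L' argument suggests):

```python
from scipy.misc import imread

# dataset.py, around line 147:
# original line:
#   img = imread(path)         # can return (H, W, 3) for RGB-encoded files
img = imread(path, mode='L')   # force single-channel loading -> (H, W)
```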
The change results in a second error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,512,3,5] vs. shape[1] = [1,512,4,6]
Thanks again!
After changing the line you mentioned in dataset.py, my test could run.
But my test pictures are 256*256.
Maybe changing the image scale would work.
Tried a 256*256 image and it ran.
To train on our own data with a new image size, would we write a new class Custom_Model(BaseModel) in models.py that corresponds to the new dataset?
Thanks!
@PeterVennerstrom
To train on our own data with a new image size, would we write a new class Custom_Model(BaseModel) in models.py that corresponds to the new dataset?
No, that's not it. The problem is the U-Net architecture using strided convolutions for downsampling, combined with odd dimensions. Let's say your input dimensions are (184 x 274); using 7 layers of strided convolution, here's what you get in the encoder branch:
(184 x 274) -> (92, 137) -> (46, 69) -> (23, 35) -> (12, 18) -> (6, 9) -> (3, 5) -> (2, 3)
In the decoder branch we upsample by a factor of 2 and concatenate each layer with its corresponding encoder layer:
(2, 3) -> (4, 6) -> (8, 12) -> (16, 24) -> (32, 48) -> (64, 96) -> (128, 192) -> (256, 384)
As you can see, the decoder branch completely diverges from the encoder, and the reason is that at some point in the encoder branch one of the output dimensions is an odd number!
To prevent that, you can either make sure your input dimensions are powers of 2 (128, 256, 512, ...), or, if your input size is fixed, adjust the convolution paddings in the encoder branch so that every output dimension stays an even number!
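To make the mismatch easy to check, here is a small sketch (not from the repo, and assuming 'SAME'-padded stride-2 convolutions in the encoder and factor-2 upsampling in the decoder) that traces the spatial dimensions:

```python
import math

def encoder_shapes(h, w, layers=7):
    """Spatial sizes after each stride-2, 'SAME'-padded convolution (ceil division)."""
    shapes = [(h, w)]
    for _ in range(layers):
        h, w = math.ceil(h / 2), math.ceil(w / 2)
        shapes.append((h, w))
    return shapes

def decoder_shapes(h, w, layers=7):
    """Spatial sizes after each factor-2 upsampling, starting from the bottleneck."""
    shapes = [(h, w)]
    for _ in range(layers):
        h, w = h * 2, w * 2
        shapes.append((h, w))
    return shapes

enc = encoder_shapes(184, 274)
dec = decoder_shapes(*enc[-1])
print(enc)  # [(184, 274), (92, 137), (46, 69), (23, 35), (12, 18), (6, 9), (3, 5), (2, 3)]
print(dec)  # [(2, 3), (4, 6), (8, 12), ...] -- (4, 6) no longer matches the encoder's (3, 5)
```

With a 256 x 256 input, every encoder size stays even (256 -> 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2), so the decoder's upsampled sizes match the skip connections exactly.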
The same problem leads to this issue. Maybe this should be added to the documentation, or a resize preprocessing step added for test data?
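As a stopgap, test images could be resized so both dimensions are powers of 2 before being fed to the network. A minimal sketch, assuming PIL is available and using a hypothetical target_size of 256 (neither is part of the repo):

```python
from PIL import Image

def resize_for_unet(path, target_size=256):
    """Resize an image so both spatial dimensions are a power of 2.

    target_size=256 is an assumption; any size the 7-layer encoder can
    halve repeatedly without hitting an odd dimension would work.
    """
    img = Image.open(path).convert('L')  # load as single-channel grayscale
    return img.resize((target_size, target_size), Image.BILINEAR)
```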
Thanks for the clarification. Trained a model on the Kaggle Humpback Whale Identification data using (512 x 256) images.
https://imgur.com/a/nSEaGxa
Great work!