Error with train_on_batch
I'm interested in your great work. I used a dataset whose images are 217 x 181 and got the error below. Do you have any idea what is going wrong?
Line 404:

<ipython-input-125-a3353993013f> in run_training_batch()
     19 D_A_loss_real = model['D_A'].train_on_batch(x=real_images_A, y=ones)
     20 D_B_loss_real = model['D_B'].train_on_batch(x=real_images_B, y=ones)
---> 21 D_A_loss_synthetic = model['D_A'].train_on_batch(x=synthetic_images_A, y=zeros)
     22 D_B_loss_synthetic = model['D_B'].train_on_batch(x=synthetic_images_B, y=zeros)
     23 D_A_loss = D_A_loss_real + D_A_loss_synthetic

ValueError: Error when checking input: expected input_85 to have shape (217, 181, 1) but got array with shape (220, 184, 1)
I only changed the image_shape and image_folder values in line 32:

def __init__(self, lr_D=2e-4, lr_G=2e-4, image_shape=(217, 181, 1), date_time_string_addition='', image_folder='T1-T2'):
Hi @kohheekyung, sorry for the late answer. Image sizes are tricky since the layers in the models automatically add padding (Keras's default behaviour), which changes the output sizes and often leads to errors like this one. Try padding your images to (220, 184, 1) before training.
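Something along these lines should work as a quick fix. It is only a minimal sketch, assuming your images are already loaded as NumPy arrays of shape (217, 181, 1); my guess is that the up/downsampling layers need the spatial dimensions to be divisible by 4, which (220, 184) satisfies and (217, 181) does not.

```python
# Minimal padding sketch (not part of the repo): zero-pads a (217, 181, 1)
# image symmetrically up to (220, 184, 1) before it is fed to the networks.
import numpy as np

def pad_to_shape(image, target_shape=(220, 184, 1)):
    pad_h = target_shape[0] - image.shape[0]   # 220 - 217 = 3
    pad_w = target_shape[1] - image.shape[1]   # 184 - 181 = 3
    return np.pad(
        image,
        ((pad_h // 2, pad_h - pad_h // 2),     # split between top/bottom
         (pad_w // 2, pad_w - pad_w // 2),     # split between left/right
         (0, 0)),                              # leave the channel axis alone
        mode='constant', constant_values=0)

# padded = pad_to_shape(img)  # (217, 181, 1) -> (220, 184, 1)
```

If you go this route, remember to also set image_shape=(220, 184, 1) in __init__ so the model expects the padded size.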
@simontomaskarlsson
Image sizes are tricky since the layers in the models automatically add padding (Keras's default behaviour), which changes the output sizes.
Can't we change the default behaviour? It would be nice if the code handled different image sizes, something like the following (a sketch of the first option is below the list):
read user images
resize to a specific model size
or
read user images
the model adapts to this size
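For the first option, something as simple as this could be run on each image before it reaches the model. It is only a rough sketch and everything in it is my own assumption (the fixed MODEL_SHAPE, the use of scikit-image); any resize routine such as PIL, OpenCV or tf.image would do.

```python
# Rough sketch of the first option: force every user image to one fixed
# model size before training.
import numpy as np
from skimage.transform import resize

MODEL_SHAPE = (220, 184, 1)  # hypothetical fixed size the model is built for

def to_model_size(image):
    # resize() returns floats; rescale/cast afterwards if the training
    # pipeline expects a specific range such as [-1, 1].
    return resize(image, MODEL_SHAPE, mode='reflect',
                  anti_aliasing=True).astype(np.float32)
```

The second option (making the model adapt) would presumably mean building the networks fully convolutionally, e.g. with image_shape=(None, None, 1), but that requires changes inside the model code itself.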