
Own Training Data has shape 256,256,1

Open SvSz opened this issue 4 years ago • 0 comments

Hey guys, it's me again.

I am still working on my DEM data. As I already mentioned, I inpainted my 32,32,1 data with your Partial Convolution model, and the results were satisfactory! This time I am using the same data set with bigger images.

After normalization, my data looks like this:

[image: hstio1000]

```
x_train shape: (1000, 256, 256, 1)
x_test shape: (100, 256, 256, 1)
Minimum Value in Train: 0.0
Maximum Value in Train: 1.0
Minimum Value in Test: 0.0821
Maximum Value in Test: 0.6226
```
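For context, the scaling that produces these numbers is plain min-max normalization with the training set's statistics; a minimal sketch (assuming numpy arrays, variable names as above):

```python
import numpy as np

# Scale with the training set's min/max so x_train spans exactly [0, 1];
# x_test then falls into a sub-range (here 0.0821 to 0.6226).
x_min, x_max = x_train.min(), x_train.max()
x_train = (x_train - x_min) / (x_max - x_min)
x_test = (x_test - x_min) / (x_max - x_min)
```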

[image: VT1000]

Masks were created as before and carry the value 1.0.
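For reference, "as before" means the random line-drawing masks from the original notebook; a rough sketch of that idea (function name and parameter ranges are illustrative, not the exact notebook code):

```python
import numpy as np
import cv2

def create_mask(height=256, width=256, max_lines=10):
    # Draw random white lines on a black canvas; masked pixels carry the value 1.0
    mask = np.zeros((height, width, 1), dtype=np.float32)
    for _ in range(np.random.randint(1, max_lines)):
        x1, x2 = np.random.randint(1, width, 2)
        y1, y2 = np.random.randint(1, height, 2)
        thickness = int(np.random.randint(1, 4))
        cv2.line(mask, (int(x1), int(y1)), (int(x2), int(y2)), 1.0, thickness)
    return mask
```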

[image: VT1000sct]

Histograms of a sample (sample_idx = 20) from the masked images in traingen seem plausible:

[image: traingen]
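That plot comes from inspecting a single sample, roughly like this (a sketch; it assumes each traingen batch yields ([masked_images, masks], originals) with batch_size=8):

```python
import matplotlib.pyplot as plt

# sample_idx = 20 with batch_size 8 means batch 2, item 4
[masked_images, masks], originals = traingen[2]
sample = masked_images[4].ravel()

plt.hist(sample[sample != 1.0], bins=50)  # drop masked pixels (value 1.0)
plt.title('Pixel values of one masked training image')
plt.show()
```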

My major changes:

```python
class createAugment(keras.utils.Sequence):
  # formatxy is defined earlier in the notebook as the new image size, (256, 256)
  def __init__(self, X, y, batch_size=8, dim=formatxy, n_channels=1, shuffle=True):

class InpaintingModel():
  kernel_size = 3
  kernel = (kernel_size, kernel_size)

  # input_size must be a tuple; a bare 256,256,1 in the signature is a SyntaxError
  def prepare_model(self, input_size=(256, 256, 1)):

    [...]
    conv15, mask15, conv16, mask16 = self.__decoder_layer(32, 3, conv14, mask14, conv1, mask1, ['conv15', 'decoder_output'])

    # kernel is a class attribute, so it must be referenced via self inside the method
    outputs = keras.layers.Conv2D(1, self.kernel, activation='sigmoid', padding='same')(conv16)

# compile() has no learning_rate argument; the rate has to be set on the optimizer
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.01),
              loss='mean_absolute_error', metrics=[dice_coef])
```
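dice_coef is not shown above; it is the usual smoothed Dice coefficient, and a minimal Keras-backend sketch looks like this (my exact helper is in the attached notebook):

```python
from tensorflow.keras import backend as K

def dice_coef(y_true, y_pred, smooth=1e-6):
    # Dice = 2*|A intersect B| / (|A| + |B|); smooth avoids division by zero
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
```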

The model summary is always very long; I can only repeat that it looks identical to the initial one for 32,32,3, since I changed nothing else. The bottom says:

```
Total params: 4,769,039
Trainable params: 4,769,039
Non-trainable params: 0
```

Complete version: IPC_VT-1000.txt

Here is the run:

```
Epoch 1/5
125/125 [==============================] - 161s 1s/step - loss: 0.1088 - dice_coef: 0.4870 - val_loss: 0.0252 - val_dice_coef: 0.3564
Epoch 2/5
125/125 [==============================] - 152s 1s/step - loss: 0.0236 - dice_coef: 0.5171 - val_loss: 0.0130 - val_dice_coef: 0.3601
Epoch 3/5
125/125 [==============================] - 152s 1s/step - loss: 0.0149 - dice_coef: 0.5232 - val_loss: 0.0129 - val_dice_coef: 0.3592
Epoch 4/5
125/125 [==============================] - 152s 1s/step - loss: 0.0138 - dice_coef: 0.5247 - val_loss: 0.0125 - val_dice_coef: 0.3596
Epoch 5/5
125/125 [==============================] - 153s 1s/step - loss: 0.0119 - dice_coef: 0.5237 - val_loss: 0.0111 - val_dice_coef: 0.3604
```

Here is the output:

Legend: Original Image | Generated Mask | Inpainted Image | Ground Truth

[image: results1]
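The four panels are laid out roughly like this (a sketch; it assumes testgen has the same batch interface as traingen):

```python
import matplotlib.pyplot as plt

[masked, masks], originals = testgen[0]
inpainted = model.predict([masked, masks])

titles = ['Original Image', 'Generated Mask', 'Inpainted Image', 'Ground Truth']
panels = [masked[0], masks[0], inpainted[0], originals[0]]

fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, img, title in zip(axes, panels, titles):
    ax.imshow(img.squeeze(), cmap='gray')
    ax.set_title(title)
    ax.axis('off')
plt.show()
```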

I like the results, considering that I have not yet managed a full 20-epoch run; see below for why that is. My problem is that val_loss stops dropping and the model does not learn.

Here is the whole notebook: ipc_vt_1k_4.zip

Of course my DEM data (VT-1000) is too big (about 150 MB) to upload here.


Another run with 10 epochs and a 5x5 kernel size gave even better results:

[image: graphs2]

However, when I repeat the run, even after restarting the kernel, it may turn out that the model does not learn a thing (the results are black images with the same value on every pixel), even though I did not change anything. I use Google Colab's cloud GPUs and am unsure whether they can cause such problems, but this does undermine the model's scientific repeatability. So if anyone is able to help me here, I will be very thankful!
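One thing I still need to rule out is unseeded randomness; a sketch of the seeding I plan to add before building the model (the seed value is arbitrary):

```python
import random
import numpy as np
import tensorflow as tf

# Seed every RNG the pipeline touches; note that some GPU ops
# (e.g. cuDNN kernels) can stay nondeterministic even with fixed seeds.
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
```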

I will continue by implementing a learning-rate schedule to hopefully fix this.
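Something like Keras's ReduceLROnPlateau callback is what I have in mind; a sketch (testgen is assumed to be my validation generator):

```python
from tensorflow import keras

# Halve the learning rate whenever val_loss stalls for two epochs
reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.5, patience=2, min_lr=1e-5)

model.fit(traingen, validation_data=testgen, epochs=20, callbacks=[reduce_lr])
```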

Cheers

SvSz · Feb 25 '21, 11:02