
grey output predicted images

Open · helwilliams opened this issue on Oct 18 '18 · 9 comments

Hello, I am training on ultrasound images and my predicted images are all grey, and I get a message that the test data is too low in contrast. Is there a way to solve this? Would it require more data, a bigger batch size, or more steps per epoch? I have around 30 training images, but after data augmentation it is around 2000.

Thanks in advance

helwilliams · Oct 18 '18

I have the same problem. If you have found any way to fix it, could you tell me? Thanks!

YangBai1109 · Oct 19 '18

I have added more data to my training set but still get the same problem. Have you had any luck getting it to work, @PanPan0210?

helwilliams · Oct 23 '18

How many pixels is your "ground truth" feature? For small features it may be necessary to adjust your U-Net model to fewer layers (10 layers => 8 layers or 6 layers); otherwise the small feature disappears after several convolutions and poolings.

6-layer model (imports and a 256x256x1 input added here so the snippet stands alone; adjust the input shape to your own data):

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, Dropout, concatenate

inputs = Input((256, 256, 1))  # grayscale input; change to match your image size

# Encoder: only two down-sampling stages, so the bottleneck is input/4 instead of input/16
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

# Bottleneck with dropout
conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)
drop3 = Dropout(0.5)(conv3)

# Decoder: feed the dropout output into the up-sampling path so Dropout(0.5) actually takes effect
up4 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop3))
merge4 = concatenate([conv2,up4],axis=3)
conv4 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge4)
conv4 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)

up5 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv4))
merge5 = concatenate([conv1,up5],axis=3)
conv5 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge5)
conv5 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
conv5 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)

# Single-channel sigmoid output for a binary segmentation mask
conv6 = Conv2D(1, 1, activation = 'sigmoid')(conv5)

model = Model(inputs = inputs, outputs = conv6)
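
A quick way to sanity-check the reduced depth before training (a sketch only; 'adam' and binary cross-entropy are common defaults for a single-channel sigmoid output, not settings from this thread):

model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
model.summary()  # with two pooling stages the bottleneck is input/4, vs input/16 for the full 10-layer U-Net,
                 # so small ground-truth features are far less likely to shrink to nothing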

teamo429 · Oct 25 '18

Thank you @teamo429, it seemed to work. It's still not a very nice network for my data, but I'll have a play with the settings. Thanks.

helwilliams · Oct 30 '18

@helwilliams, what settings did you end up playing with?

alex337 · Mar 30 '19

@teamo429 I have two datasets, and your solution (with 8 layers) worked for one but not for the other. For the second dataset I still get completely grey images, even though the two datasets are really quite similar. By the way, how did you know that the number of layers was what had to change? Thanks!

iambackit · Apr 03 '19

Check the output values of your predicted images! If they are floats in the 0-1 range, you could try multiplying the images by 255 before saving them. I got grey images before I did this.
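
For example, a quick diagnostic along those lines (a sketch only: test_images and the output file names are placeholders, and it assumes the predictions are sigmoid outputs in [0, 1] that get saved with skimage, which is also what raises the "low contrast image" warning):

import numpy as np
import skimage.io as io

results = model.predict(test_images)      # shape (N, H, W, 1), sigmoid values in [0, 1]
print(results.min(), results.max())       # if everything sits near one value, the saved image looks uniformly grey

for i, img in enumerate(results):
    img = (img[:, :, 0] * 255).astype(np.uint8)   # rescale 0-1 floats to 0-255 before saving
    io.imsave('%d_predict.png' % i, img)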

quektc · Aug 28 '20

There was actually a bracket missing in the merge4 and merge5 lines of the model above (the closing parenthesis of concatenate). Not to be scrutinising or anything, just in case someone panics.

Fleufkens · Dec 17 '20

Thanks for the reminder; the previous reply has been updated.

teamo429 · Dec 21 '20