
multi-input single-output binary segmentation

Chiaradisanto opened this issue 3 years ago · 0 comments

[Image: target network architecture]

I'm trying to implement this network. My dataset consists of images and masks of the left ventricle, and I'm performing binary segmentation. I used TimeDistributedImageDataGenerator to create inputs of shape (3, 3, 128, 128, 1), where the first two dimensions are (batch_size, time_steps) and the last three are (height, width, channels). Then I created my model:
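For context, a minimal sketch (dummy arrays instead of the actual generator) of the batch layout I am describing:

```python
import numpy as np

# One batch as yielded by the generator:
# (batch_size, time_steps, height, width, channels)
images = np.zeros((3, 3, 128, 128, 1), dtype="float32")

# Per-sample shape passed to layers.Input (batch dimension excluded):
input_shape = images.shape[1:]  # (3, 128, 128, 1)
print(input_shape)
```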

```python
from tensorflow.keras import layers, models

input_shape = (3, 128, 128, 1)  # (time_steps, height, width, channels)

input_l = layers.Input(shape=input_shape)

# Encoder
x = layers.TimeDistributed(layers.Conv2D(64, kernel_size=(3, 3), padding='same', strides=(1, 1)))(input_l)
conv2 = layers.TimeDistributed(layers.Conv2D(64, kernel_size=(3, 3), padding='same', strides=(1, 1)))(x)
x = layers.TimeDistributed(layers.MaxPooling2D(pool_size=(2, 2)))(conv2)
x = layers.TimeDistributed(layers.Conv2D(128, kernel_size=(3, 3), padding='same', strides=(1, 1)))(x)
conv5 = layers.TimeDistributed(layers.Conv2D(128, kernel_size=(3, 3), padding='same', strides=(1, 1)))(x)
x = layers.TimeDistributed(layers.MaxPooling2D(pool_size=(2, 2)))(conv5)
x = layers.TimeDistributed(layers.Conv2D(256, kernel_size=(3, 3), padding='same', strides=(1, 1)))(x)
conv8 = layers.TimeDistributed(layers.Conv2D(256, kernel_size=(3, 3), padding='same', strides=(1, 1)))(x)
x = layers.TimeDistributed(layers.MaxPooling2D(pool_size=(2, 2)))(conv8)

# Bottleneck
x = layers.Bidirectional(layers.ConvLSTM2D(256, kernel_size=(3, 3), padding='same', strides=(1, 1), return_sequences=True))(x)

# Decoder
up1 = layers.TimeDistributed(layers.Conv2DTranspose(512, kernel_size=(3, 3), padding='same', strides=(2, 2)))(x)
concat1 = layers.concatenate([up1, conv8])
x = layers.TimeDistributed(layers.Conv2D(256, kernel_size=(3, 3), padding='same', strides=(1, 1)))(concat1)
x = layers.TimeDistributed(layers.Conv2D(256, kernel_size=(3, 3), padding='same', strides=(1, 1)))(x)
up2 = layers.TimeDistributed(layers.Conv2DTranspose(256, kernel_size=(3, 3), padding='same', strides=(2, 2)))(x)
concat2 = layers.concatenate([up2, conv5])
x = layers.TimeDistributed(layers.Conv2D(128, kernel_size=(3, 3), padding='same', strides=(1, 1)))(concat2)
x = layers.TimeDistributed(layers.Conv2D(128, kernel_size=(3, 3), padding='same', strides=(1, 1)))(x)
up3 = layers.TimeDistributed(layers.Conv2DTranspose(128, kernel_size=(3, 3), padding='same', strides=(2, 2)))(x)
concat3 = layers.concatenate([up3, conv2])
x = layers.TimeDistributed(layers.Conv2D(64, kernel_size=(3, 3), padding='same', strides=(1, 1)))(concat3)
x = layers.Bidirectional(layers.ConvLSTM2D(32, kernel_size=(3, 3), padding='same', strides=(1, 1), return_sequences=True))(x)

# Output head
out = layers.TimeDistributed(layers.Conv2D(64, kernel_size=(1, 1), padding='same', strides=(1, 1), data_format='channels_last'))(x)
# out = tf.reshape(out, (-1, 1, 256, 256, 64))
out = layers.Conv2D(1, kernel_size=(1, 1), padding='same', strides=(1, 1), activation='sigmoid')(out)

model = models.Model(inputs=input_l, outputs=out)
model.summary()
```
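As a quick check (a sketch assuming the model builds as above and the input images are 128 x 128), the output still carries the time axis of 3:

```python
import numpy as np

# Symbolic output shape still has the time dimension:
print(model.output_shape)          # expected: (None, 3, 128, 128, 1)

# Same thing on a dummy batch:
dummy = np.zeros((1, 3, 128, 128, 1), dtype="float32")
print(model.predict(dummy).shape)  # expected: (1, 3, 128, 128, 1), but I need (1, 128, 128, 1)
```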

How could I change my model so that it outputs just one single image, as the image suggests (the central one)? I think the problem is that the first dimension of the output (3, the time axis) must be reduced to 1. I tried adding Flatten and/or Reshape layers, without success. Any ideas?
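To make the goal concrete, here is a rough, untested sketch of the kind of output head I have in mind: slicing out the central time step before the final convolution. The Lambda-based slice and the small stand-in model are just my guess at one possible approach, not part of the original network:

```python
from tensorflow.keras import layers, models

# Minimal stand-in for the decoder output `x` above, shaped (batch, 3, 128, 128, 64)
inp = layers.Input(shape=(3, 128, 128, 1))
x = layers.TimeDistributed(layers.Conv2D(64, kernel_size=(3, 3), padding='same'))(inp)

# Keep only the central frame of the 3-step sequence: (batch, 3, H, W, C) -> (batch, H, W, C)
central = layers.Lambda(lambda t: t[:, 1])(x)

# Per-pixel binary prediction for that single frame
out = layers.Conv2D(1, kernel_size=(1, 1), padding='same', activation='sigmoid')(central)

head = models.Model(inp, out)
print(head.output_shape)  # (None, 128, 128, 1)
```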

Chiaradisanto · Jul 04 '22 09:07