model trains on images but does not apply on the same images
Hello. Keras does train this model on grayscale 128x128 images (of three types of living cells) without complaint.
```python
# Building Model using Sequential
model = Sequential()

# Convolution 1
model.add(Conv2D(32, kernel_size=(3, 3), activation="relu", input_shape=(128, 128, 3)))
model.add(AvgPool2D(pool_size=(3, 3)))
model.add(BatchNormalization())
model.add(Dropout(0.3))

# Convolution 2
model.add(Conv2D(64, kernel_size=(3, 3), activation="relu"))
model.add(AvgPool2D(pool_size=(3, 3)))
model.add(BatchNormalization())
model.add(Dropout(0.3))

# Flatten & Linear Fully Connected Layers
model.add(Flatten())
model.add(Dense(32, activation="relu"))
model.add(Dropout(0.3))
model.add(Dense(3, activation="softmax"))

# Compiling Model
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Steps
train_steps = len(train_loader)
test_steps = len(test_loader)
train_metrics = model.fit_generator(
    generator=train_loader,
    steps_per_epoch=train_steps,
    epochs=40,
    validation_data=test_loader,
    validation_steps=test_steps,
)
```
However, when I try to apply the trained model to one of the images that was used for training (for example, image 5):

```python
img = Image.open(mix_list[5])
img_arr = np.array(img)
img_arr = img_arr[np.newaxis, :]
img_arr = img_arr.astype("float")
img_arr = img_gen.standardize(img_arr)
```
then, when it comes to extracting the probability,

```python
probability = model(img_arr)
```

Keras reports:

```
ValueError: Exception encountered when calling layer 'sequential' (type Sequential).
Input 0 of layer "conv2d" is incompatible with the layer: expected min_ndim=4,
found ndim=3. Full shape received: (1, 128, 128)
```
What is this mysterious min_ndim=4? It plays no role during training, but it conflicts when I try to test the trained model. Perhaps this is something about how I use Image.open(...) and/or np.array(...).
Thank you.
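For reference, the shapes the error complains about can be reproduced with NumPy alone; a hypothetical 128x128 grayscale array stands in for the PIL image here:

```python
import numpy as np

# np.array(Image.open(...)) on a grayscale image is 2-D: (height, width).
img = np.zeros((128, 128), dtype="uint8")

# Adding only a batch axis still gives ndim=3, which Conv2D rejects:
img_arr = img[np.newaxis, :]
print(img_arr.shape)  # (1, 128, 128)
print(img_arr.ndim)   # 3 -- Conv2D wants min_ndim=4: (batch, height, width, channels)
```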
Please use the fit method instead of fit_generator. The fit method accepts generators as input. Also, if you are using grayscale images, you must specify the input shape as follows: input_shape=(128, 128). With input_shape=(128, 128, 3) you are telling Keras that your inputs are RGB images (3 channels on the last dimension).
Dear Emilio, thank you for the kind reply. The instruction to specify the input shape as input_shape=(128, 128) is invalid; it crashes the code:

```
----> 4 model.add(Conv2D(32, kernel_size=(3,3), activation = "relu", input_shape = (128, 128)))
ValueError: Input 0 of layer "conv2d_18" is incompatible with the layer: expected min_ndim=4, found ndim=3. Full shape received: (None, 128, 128)
```

After I return to the format input_shape=(128, 128, 3), which seems to be the only one Keras can take, I try to run

```python
train_metrics = model.fit method(
    generator = train_loader,
    steps_per_epoch = train_steps,
    epochs = 60,
    validation_data = test_loader,
    validation_steps = test_steps
)
```

only to receive another error:

```
train_metrics = model.fit method(
                          ^
SyntaxError: invalid syntax
```

I do not think that Keras is as simple as one may think.

With best wishes,
Victor
@VictorVVolkov, please provide the input shape as input_shape=(128, 128, 1) if you are using grayscale images and, as suggested in the comment above, use model.fit.
If you are not using Keras 3, install Keras 3 and import it directly; you can then choose the backend of your choice from TensorFlow, PyTorch, or JAX.
More details here: https://keras.io/guides/migrating_to_keras_3/
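As a sketch of what the migration guide describes: in Keras 3 the backend is selected via the KERAS_BACKEND environment variable, which must be set before the first import (the choice of "jax" here is arbitrary):

```python
import os

# Keras 3 reads KERAS_BACKEND once, at import time, so set it first.
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow", "torch"

# import keras
# keras.backend.backend() would now report "jax".
```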
Hey, try using the fit method instead of fit_generator, and expand the array to the expected number of dimensions with tf.expand_dims(img_arr, -1). Also change the RGB channel count (3) to the grayscale channel count (1); in other words, change input_shape = (128, 128, 3) to input_shape = (128, 128, 1).
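A minimal NumPy sketch of that dimension fix, assuming the single grayscale image from the question (tf.expand_dims(img_arr, -1) is the TensorFlow equivalent of the last step):

```python
import numpy as np

img_arr = np.zeros((128, 128), dtype="float32")  # stand-in for the loaded image

img_arr = img_arr[np.newaxis, ...]   # add batch axis   -> (1, 128, 128)
img_arr = img_arr[..., np.newaxis]   # add channel axis -> (1, 128, 128, 1)

print(img_arr.shape)  # now 4-D, matching a model built with input_shape=(128, 128, 1)
```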
This issue is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.
You need to add the batch dimension; try with input_shape = (None, 128, 128, 3).
This issue was closed because it has been inactive for 28 days. Please reopen if you'd like to work on this further.