deep-learning-with-python-notebooks
Data-augmentation generators not working with TensorFlow 2
I am trying to train a model with data-augmentation generators on TensorFlow 2.0, using the code below:
```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augment the training data; only rescale the validation data.
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(150, 150),
                                                    batch_size=32,
                                                    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(validation_dir,
                                                        target_size=(150, 150),
                                                        batch_size=32,
                                                        class_mode='binary')

history = model.fit_generator(train_generator,
                              steps_per_epoch=100,
                              epochs=100,
                              validation_data=validation_generator,
                              validation_steps=50)
```
But on the first epoch I get this error:
```
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
WARNING:tensorflow:From <ipython-input-18-e571f2719e1b>:27: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
WARNING:tensorflow:sample_weight modes were coerced from
  ...
    to
  ['...']
WARNING:tensorflow:sample_weight modes were coerced from
  ...
    to
  ['...']
Train for 100 steps, validate for 50 steps
Epoch 1/100
63/100 [=================>............] - ETA: 59s - loss: 0.7000 - accuracy: 0.5000 WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 10000 batches). You may need to use the repeat() function when building your dataset.
```
Please let me know how I should modify the above code for TensorFlow 2.

Thanks, Rahi
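As the deprecation warning in the log suggests, TensorFlow 2's `Model.fit` accepts generators directly. A minimal sketch of the equivalent call, assuming the `train_generator` and `validation_generator` defined above and an already-compiled `model`:

```python
# fit() accepts Keras Sequence objects such as the DirectoryIterator
# returned by flow_from_directory. len(generator) reports
# ceil(samples / batch_size), so using it for steps_per_epoch keeps the
# requested steps within what the generator can actually deliver.
history = model.fit(train_generator,
                    steps_per_epoch=len(train_generator),        # 63 at batch_size=32
                    epochs=100,
                    validation_data=validation_generator,
                    validation_steps=len(validation_generator))  # 32 at batch_size=32
```

Note that 63 matches the step where training was interrupted above: with 2000 images and `batch_size=32`, the generator yields only ceil(2000 / 32) = 63 batches per epoch, not the 100 that `steps_per_epoch=100` asks for.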
Same problem here :) @rahiakela did you manage to solve this in some way? As a workaround, I set the `batch_size` back to 20, so that `steps_per_epoch * batch_size` is no more than 2000, the number of examples in the training dataset. I increased the number of epochs to 150 in order to "compensate"...
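Concretely, a minimal sketch of that arithmetic, using the sample counts from this thread:

```python
import math

num_train_samples = 2000   # training images found by flow_from_directory

# With batch_size=32 the generator yields only ceil(2000 / 32) = 63 batches
# per epoch, which is why training stopped at step 63/100 above.
print(math.ceil(num_train_samples / 32))           # -> 63

batch_size = 20                                    # the workaround
steps_per_epoch = num_train_samples // batch_size  # 2000 // 20 = 100
assert steps_per_epoch * batch_size <= num_train_samples
```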
Yes, I was able to tackle it; you can see the working example in my notebook: https://github.com/rahiakela/deep-learning-with-python-francois-chollet/blob/5-deep-learning-for-computer-vision/2_training_convnet_from_scratch_on_small_dataset.ipynb
Nice! But you could obtain better performance by setting `steps_per_epoch=100`, `batch_size=20`, and `epochs=150`. This should make your curves less noisy. Look at my example here: https://github.com/lucone83/deep-learning-with-python/blob/master/notebooks/chapter_05/02%20-%20Using%20convnets%20with%20small%20datasets.ipynb
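For reference, a sketch of that configuration, assuming the `train_datagen` / `test_datagen` objects and directory variables from the original post, with the generators rebuilt at `batch_size=20`:

```python
# 2000 training and 1000 validation images, so batch_size=20 gives exactly
# 100 training steps and 50 validation steps per epoch.
train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(150, 150),
                                                    batch_size=20,
                                                    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(validation_dir,
                                                        target_size=(150, 150),
                                                        batch_size=20,
                                                        class_mode='binary')

history = model.fit(train_generator,
                    steps_per_epoch=100,   # 2000 / 20
                    epochs=150,            # more epochs to compensate
                    validation_data=validation_generator,
                    validation_steps=50)   # 1000 / 20
```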
Thanks for the suggestion... it worked as expected.