unet
About train, validation and test
Hello, to whom it may concern: in this U-Net repository, the original code only has training (trainGenerator) and predicting (predict_generator). So I wonder how to set up training, validation, and testing? Thanks to anyone who knows the answer!
```python
hist = model.fit_generator(trainGene, validation_data=validGene, validation_steps=3,
                           steps_per_epoch=step_epoch, epochs=epochs, verbose=2,
                           shuffle=True, callbacks=[model_checkpoint, tensorboard, history])
```
I solved the problem this way. Hope it helps someone.
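For context, `step_epoch` and `validation_steps` are not magic numbers; they are usually derived from the split sizes and the batch size. A minimal sketch (the counts below are placeholders, not values from the original post):

```python
import math

batch_size = 2
num_train_images = 30  # placeholder: number of images in your training split
num_valid_images = 6   # placeholder: number of images in your validation split

# One "step" consumes one batch, so cover each split once per epoch.
step_epoch = math.ceil(num_train_images / batch_size)
validation_steps = math.ceil(num_valid_images / batch_size)
```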
I'm curious how you defined the callbacks for tensorboard and history. If you don't mind, can you share?
```python
tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0, write_graph=True, write_images=False)
```
This defines the TensorBoard callback; you need to import the relevant packages.
```python
history = LossHistory()
```
This defines the history callback. LossHistory() is a class used to draw curves based on the training log, so it simply records the contents of the log.
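The LossHistory class itself isn't shown in the thread; a minimal sketch of such a callback, assuming the standard keras.callbacks.Callback API, could look like this:

```python
from keras.callbacks import Callback

class LossHistory(Callback):
    """Record per-epoch training/validation loss so curves can be plotted later."""
    def on_train_begin(self, logs=None):
        self.losses = []
        self.val_losses = []

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        self.losses.append(logs.get('loss'))
        self.val_losses.append(logs.get('val_loss'))

history = LossHistory()
```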
Thanks!
What does your validGene implementation look like?
It's similar to trainGene, but without the data augmentation part.
Something like this?
```python
from keras.preprocessing.image import ImageDataGenerator
# adjustData is assumed to come from the repository's data.py, alongside trainGenerator.

def validGenerator(batch_size, val_path, image_folder, mask_folder,
                   image_color_mode="grayscale", mask_color_mode="grayscale",
                   image_save_prefix="val_image", mask_save_prefix="val_mask",
                   flag_multi_class=False, num_class=2, save_to_dir=None,
                   target_size=(256, 256), seed=1):
    # Same structure as trainGenerator, but with no augmentation arguments.
    image_datagen = ImageDataGenerator()
    mask_datagen = ImageDataGenerator()
    image_generator = image_datagen.flow_from_directory(
        val_path,
        classes=[image_folder],
        class_mode=None,
        color_mode=image_color_mode,
        target_size=target_size,
        batch_size=batch_size,
        save_to_dir=save_to_dir,
        save_prefix=image_save_prefix,
        seed=seed)
    mask_generator = mask_datagen.flow_from_directory(
        val_path,
        classes=[mask_folder],
        class_mode=None,
        color_mode=mask_color_mode,
        target_size=target_size,
        batch_size=batch_size,
        save_to_dir=save_to_dir,
        save_prefix=mask_save_prefix,
        seed=seed)
    # The shared seed keeps images and masks paired in the same order.
    valid_generator = zip(image_generator, mask_generator)
    for (img, mask) in valid_generator:
        img, mask = adjustData(img, mask, flag_multi_class, num_class)
        yield (img, mask)
```
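For reference, one way this could be wired into the training call; the path, batch size, and step count below are hypothetical and should be adjusted to your own data layout:

```python
# Hypothetical layout: <val_path>/image and <val_path>/mask subfolders.
validGene = validGenerator(2, 'data/membrane/val', 'image', 'mask')

hist = model.fit_generator(trainGene, validation_data=validGene, validation_steps=3,
                           steps_per_epoch=step_epoch, epochs=epochs,
                           callbacks=[model_checkpoint, tensorboard, history])
```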
yes 👍
@Ahgni - just checking whether the above idea worked correctly?
I have one last question: what did you set batch_size to? Is this the same as the training generator or is it best to set it to 1?
The batch size is the same as for the training generator. If you're curious, you can compare different batch sizes and see how the results differ.
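For completeness, since the original question also asked about testing: a held-out test split can be evaluated the same way, for example by building a generator like validGenerator above and calling evaluate_generator. A minimal sketch; the path and image count are hypothetical:

```python
# Build a test generator the same way as the validation one (no augmentation).
testGene = validGenerator(1, 'data/membrane/test', 'image', 'mask')  # hypothetical path

num_test_images = 30  # placeholder: number of images in your test split
scores = model.evaluate_generator(testGene, steps=num_test_images)
print(dict(zip(model.metrics_names, scores)))
```

The predict_generator path mentioned in the original question can still be used afterwards to save the predicted masks.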