
About train, validation and test

Open jizhang02 opened this issue 5 years ago • 11 comments

Hello, to anyone who may be concerned: in the U-Net code, the original repository only provides training (`trainGenerator`) and prediction (`predict_generator`). So I wonder how to set up training, validation, and testing? Thanks to anyone who knows the answer!

jizhang02 avatar May 26 '19 09:05 jizhang02

I solved the problem by passing a validation generator to `fit_generator`:

```python
hist = model.fit_generator(trainGene,
                           validation_data=validGene,
                           validation_steps=3,
                           steps_per_epoch=step_epoch,
                           epochs=epochs,
                           verbose=2,
                           shuffle=True,
                           callbacks=[model_checkpoint, tensorboard, history])
```

Hope this is helpful to someone.
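For reference, the train/validation/test split itself can be done on the file list before building any generators. A minimal sketch of the idea (the `split_dataset` helper, the 80/10/10 ratio, and the filenames are illustrative assumptions, not part of the repo):

```python
import random

def split_dataset(filenames, val_frac=0.1, test_frac=0.1, seed=1):
    """Shuffle a list of image filenames and split it into
    train/validation/test lists (here 80/10/10 by default)."""
    names = list(filenames)
    random.Random(seed).shuffle(names)  # fixed seed -> reproducible split
    n = len(names)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = names[:n_test]
    val = names[n_test:n_test + n_val]
    train = names[n_test + n_val:]
    return train, val, test

train, val, test = split_dataset([f"img_{i}.png" for i in range(100)])
print(len(train), len(val), len(test))  # 80 10 10
```

Each sublist can then be copied into its own directory (train/valid/test) so the image/mask generators read from separate folders.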

jizhang02 avatar May 26 '19 13:05 jizhang02


I'm curious how you defined the callbacks for tensorboard and history. If you don't mind, can you share?

deaspo avatar May 26 '19 14:05 deaspo

```python
tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0,
                          write_graph=True, write_images=False)
```

This defines the TensorBoard callback (you need to import the relevant packages). `history = LossHistory()` defines the history callback; `LossHistory()` is a custom class used to draw curves from the log file, so it simply records the content of the log.
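`LossHistory` is not part of Keras, so here is a minimal framework-free sketch of what such a class typically looks like (in practice it would subclass `keras.callbacks.Callback`; the method name `on_epoch_end` matches the Keras callback hook, everything else is an assumption):

```python
class LossHistory:
    """Records the loss values reported at the end of each epoch,
    so curves can be drawn later from the collected numbers."""

    def __init__(self):
        self.losses = []
        self.val_losses = []

    # Keras calls this hook once per epoch with the logged metrics.
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        self.losses.append(logs.get('loss'))
        self.val_losses.append(logs.get('val_loss'))

history = LossHistory()
history.on_epoch_end(0, {'loss': 0.42, 'val_loss': 0.47})
history.on_epoch_end(1, {'loss': 0.31, 'val_loss': 0.40})
print(history.losses)  # [0.42, 0.31]
```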

jizhang02 avatar May 26 '19 14:05 jizhang02


Thanks!

deaspo avatar May 26 '19 14:05 deaspo

> hist = model.fit_generator(trainGene, validation_data=validGene, ...) I solved the problem by writing this way.

What does your validGene implementation look like?

jcarta avatar Jun 11 '20 22:06 jcarta

It is similar to trainGene, but without the data augmentation part.

jizhang02 avatar Jun 15 '20 07:06 jizhang02


Something like this?

```python
def validGenerator(batch_size, val_path, image_folder, mask_folder,
                   image_color_mode="grayscale", mask_color_mode="grayscale",
                   image_save_prefix="val_image", mask_save_prefix="val_mask",
                   flag_multi_class=False, num_class=2, save_to_dir=None,
                   target_size=(256, 256), seed=1):
    # No augmentation arguments: validation images pass through unchanged.
    image_datagen = ImageDataGenerator()
    mask_datagen = ImageDataGenerator()

    image_generator = image_datagen.flow_from_directory(
        val_path,
        classes=[image_folder],
        class_mode=None,
        color_mode=image_color_mode,
        target_size=target_size,
        batch_size=batch_size,
        save_to_dir=save_to_dir,
        save_prefix=image_save_prefix,
        seed=seed)

    mask_generator = mask_datagen.flow_from_directory(
        val_path,
        classes=[mask_folder],
        class_mode=None,
        color_mode=mask_color_mode,
        target_size=target_size,
        batch_size=batch_size,
        save_to_dir=save_to_dir,
        save_prefix=mask_save_prefix,
        seed=seed)

    # The shared seed keeps image and mask batches aligned.
    valid_generator = zip(image_generator, mask_generator)
    for (img, mask) in valid_generator:
        img, mask = adjustData(img, mask, flag_multi_class, num_class)
        yield (img, mask)
```
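For readers without the repo open: `adjustData` lives in the repo's `data.py` and, roughly, rescales raw pixel values to [0, 1] and binarizes the mask. A simplified sketch of that behavior for the binary (`flag_multi_class=False`) case only; the function name here is hypothetical:

```python
import numpy as np

def adjust_data_sketch(img, mask):
    """Simplified adjustData for the binary case: rescale 8-bit
    pixels to [0, 1] and threshold the mask to exactly 0 or 1."""
    if img.max() > 1:          # raw 0-255 pixel values
        img = img / 255.0
        mask = mask / 255.0
    mask[mask > 0.5] = 1       # foreground
    mask[mask <= 0.5] = 0      # background
    return img, mask

img = np.array([[0., 128., 255.]])
mask = np.array([[0., 200., 255.]])
img, mask = adjust_data_sketch(img, mask)
print(mask.tolist())  # [[0.0, 1.0, 1.0]]
```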

jcarta avatar Jun 21 '20 16:06 jcarta


yes 👍

jizhang02 avatar Jun 21 '20 21:06 jizhang02

@Ahgni - just checking: did the above idea work correctly?

gganes3 avatar Jun 22 '20 13:06 gganes3


I have one last question: what did you set batch_size to? Is it the same as for the training generator, or is it best to set it to 1?

jcarta avatar Jun 28 '20 21:06 jcarta


The batch size is the same as for the training generator. If you're curious, you can compare different batch sizes and see how the results differ.
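Related arithmetic: `validation_steps` × `batch_size` should cover the validation set once per epoch (the thread's hard-coded `validation_steps=3` only works for a specific set size). A quick sketch, where the sample counts are made-up examples:

```python
import math

def steps_for(num_samples, batch_size):
    """Number of generator steps needed to see every sample once."""
    return math.ceil(num_samples / batch_size)

batch_size = 2                                  # same for train and validation
steps_per_epoch = steps_for(600, batch_size)    # 600 training images
validation_steps = steps_for(60, batch_size)    # 60 validation images
print(steps_per_epoch, validation_steps)        # 300 30
```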

jizhang02 avatar Jun 29 '20 13:06 jizhang02