
Why is the minibatch loss so strange?

Open rangerli opened this issue 7 years ago • 3 comments

Here is my code:

```python
from tf_unet import unet, util, image_util

# preparing data loading
search_path = 'data/train/*.tif'
data_provider = image_util.ImageDataProvider(search_path)

# setup & training
net = unet.Unet(layers=4, features_root=64, channels=data_provider.channels, n_class=2)
trainer = unet.Trainer(net, optimizer='adam')
path = trainer.train(data_provider, './unet_trained', training_iters=64, epochs=100)
```

There are 16,000 images (500×500) in my dataset. Running the code gives the result below:

[screenshot of training log]

I feel so confused. Can you give me some advice on the code? Thanks a lot.

rangerli commented Mar 16 '18 08:03

Hmm, odd that the loss is the same for all mini-batches. Does it change after a few iterations? Do you get the same result every time you run it? I would try it with a smaller network and dataset to get a feeling for what is not working correctly.
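A scaled-down run along those lines might look like the following sketch (the reduced `layers`, `features_root`, and iteration counts are illustrative debugging values, and `data/train_small/` is a hypothetical subset folder, none of them from this thread):

```python
from tf_unet import unet, image_util

# load only a small subset of the data for debugging (hypothetical folder)
data_provider = image_util.ImageDataProvider('data/train_small/*.tif')

# shrink the network so one run is fast and easier to reason about
net = unet.Unet(layers=3, features_root=16,
                channels=data_provider.channels, n_class=2)
trainer = unet.Trainer(net, optimizer='adam')
trainer.train(data_provider, './unet_debug', training_iters=8, epochs=5)
```

If the loss is still frozen at the same value on a tiny setup like this, the problem is more likely in the data than in the architecture.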

jakeret commented Mar 17 '18 11:03

@jakeret Thanks for your reply. The loss for the mini-batches eventually decreases to 176.7524. I have got the same result every time, even with different numbers of layers and feature roots.

rangerli commented Mar 20 '18 07:03

The loss is unnaturally high and should decrease every epoch. Given that the loss is always the same, independent of the net architecture, I suspect that something might not be OK with the input data. It might be worth checking what data_provider(1) returns. Does the data look like what you expect? Is it within a reasonable range?
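A quick way to do that inspection (a sketch; the specific shape and range checks are illustrative, not part of the original comment):

```python
import numpy as np

# fetch a single image/label batch from the provider
x, y = data_provider(1)

print('image batch:', x.shape, 'min:', np.min(x), 'max:', np.max(x))
print('label batch:', y.shape, 'unique values:', np.unique(y))
# expectations: x in a sensible range (e.g. roughly [0, 1] after normalization),
# y one-hot with n_class channels and not all zeros or all ones
```

A constant loss across runs and architectures often points to degenerate labels (e.g. every pixel in one class) or unnormalized inputs, so these two print lines usually narrow it down quickly.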

jakeret commented Mar 22 '18 07:03