Joel Akeret
The default value for batch size is 1, and for training_iters it is 10. This means that after 20 epochs you will have used only 200 of your input images. I would...
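For concreteness, a minimal sketch of how these knobs interact using the `Trainer` API from this repo (the `data_provider` and the dataset size of 1000 are assumptions for illustration):

```python
from tf_unet import unet

# batch_size * training_iters images are consumed per epoch, so with the
# defaults (1 * 10) twenty epochs only touch 200 images in total.
net = unet.Unet(channels=1, n_class=2, layers=3, features_root=16)
trainer = unet.Trainer(net, batch_size=4)

# For a hypothetical dataset of 1000 images, raise training_iters so that
# each epoch sweeps the whole set once: 4 * 250 = 1000 images per epoch.
trainer.train(data_provider, "./unet_trained", training_iters=250, epochs=20)
```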
What is the value of [`n_class`](https://github.com/jakeret/tf_unet/blob/master/tf_unet/unet.py#L188) that you pass into the unet?
The package doesn't provide an out-of-the-box solution for this. You could take the list of [variables](https://github.com/jakeret/tf_unet/blob/master/tf_unet/unet.py#L198), filter it, and pass the adapted version to the [minimize function](https://github.com/jakeret/tf_unet/blob/master/tf_unet/unet.py#L347).
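A sketch of what that adaptation could look like (TF1-style; the `"up"`/`"out"` scope names are hypothetical, check the actual variable names in your graph):

```python
import tensorflow as tf

# Filter the trainable variables so the optimizer only updates a subset,
# e.g. to freeze the contracting path and fine-tune the rest.
all_vars = tf.trainable_variables()
train_vars = [v for v in all_vars if "up" in v.name or "out" in v.name]

# Pass the filtered list to minimize via var_list; all other variables
# are left untouched during training.
optimizer = tf.train.MomentumOptimizer(learning_rate=0.2, momentum=0.2)
train_op = optimizer.minimize(cost, var_list=train_vars)  # cost: the network's loss node
```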
https://github.com/jakeret/tf_unet/search?q=softmax&unscoped_q=softmax
The deeper the network, the smaller the output image will be. This is expected behaviour, as described in the original Ronneberger et al. paper. One thing people do to...
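To see how quickly the valid convolutions eat into the image, here is a small sketch that mirrors the size bookkeeping of the architecture (two 3x3 unpadded convs per level, 2x2 pooling down, 2x2 up-conv back up):

```python
def output_size(in_size, layers):
    """Spatial output size of a valid-padding U-Net for a given depth."""
    size = in_size
    for layer in range(layers):        # contracting path
        size -= 4                      # two 3x3 valid convs lose 2 px each
        if layer < layers - 1:
            size //= 2                 # 2x2 max pooling
    for _ in range(layers - 1):        # expanding path
        size = size * 2 - 4            # 2x2 up-conv, then two 3x3 valid convs
    return size

print(output_size(572, 5))  # 388, matching the original Ronneberger et al. figure
print(output_size(572, 3))  # 532 -- a shallower net shrinks the image less
```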
You can pass an [array](https://github.com/jakeret/tf_unet/blob/master/tf_unet/unet.py#L228) that defines how much each class should be weighted. But take it with a grain of salt; it's possibly not the ideal rebalancing scheme.
Yes, exactly. You can also try the dice loss if the dataset is unbalanced.
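Both options are constructor arguments, roughly like this (the weight values are illustrative, not a recommendation):

```python
from tf_unet import unet

# Option 1: weighted cross entropy -- upweight the rare class.
net = unet.Unet(channels=1, n_class=2,
                cost_kwargs=dict(class_weights=[0.1, 0.9]))

# Option 2: dice loss, often more robust on unbalanced datasets.
net = unet.Unet(channels=1, n_class=2, cost="dice_coefficient")
```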
Yes, two is [correct](https://github.com/jakeret/tf_unet/blob/master/tf_unet/unet.py#L432).
Thanks for your contribution. I see why this is better during training. But how should we control the dropout during validation and prediction? There we want to set the dropout...
Right. So during training we want dropout to be < 1 and during [validation](https://github.com/jakeret/tf_unet/blob/master/tf_unet/unet.py#L466) it should be = 1. How can we control this?
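Since `keep_prob` is a placeholder in the graph, the rate can be chosen per `session.run` call; roughly like this (a sketch, with the batch tensors and exact attribute names assumed from `unet.py`):

```python
# Training step: dropout active (keep_prob < 1).
sess.run(train_op, feed_dict={net.x: batch_x,
                              net.y: batch_y,
                              net.keep_prob: 0.75})

# Validation / prediction: dropout disabled (keep_prob = 1).
prediction = sess.run(net.predicter, feed_dict={net.x: batch_x,
                                                net.keep_prob: 1.})
```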