Joel Akeret
The unet is often trained with a batch size of one. However, it would be relatively easy to extend the data providers to have normalization
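A minimal sketch of what such an extension could look like: a per-image normalization step the data providers could apply before handing data to the network. The helper name and the zero-mean/unit-variance choice are illustrative, not part of tf_unet.

```python
import numpy as np

def normalize(img):
    """Hypothetical per-image normalization a data provider could apply:
    shift to zero mean and scale to unit variance. Guard against a
    constant image, whose standard deviation is zero."""
    img = np.asarray(img, dtype=np.float64)
    std = img.std()
    centered = img - img.mean()
    return centered / std if std > 0 else centered
```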
I see. I had a quick look at the BN - the data provider is obviously the wrong place. Couldn't you adapt the [`layers.conv2d`](https://github.com/jakeret/tf_unet/blob/master/tf_unet/layers.py#L35) method to call `tf.nn.moments` and then...
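To illustrate what calling `tf.nn.moments` inside `conv2d` would amount to, here is the same computation written out in NumPy: compute one mean and variance per channel over the batch and spatial axes, then normalize. This is only a sketch of the math, not tf_unet code; the `eps` term matches the small constant batch normalization adds for numerical stability.

```python
import numpy as np

def batch_norm(x, eps=1e-3):
    """NumPy sketch of the normalization tf.nn.moments plus
    tf.nn.batch_normalization would perform: statistics are taken over
    the batch, height, and width axes of an NHWC tensor, so each
    channel is normalized independently."""
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

In TensorFlow the learnable scale and offset parameters would be applied on top of this; they are omitted here for brevity.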
Great to hear that this is of use to you! The way the network works, you will always get an output image smaller than the input. If you look through older issues people...
@AlibekJ agreed, this seems to cause a lot of confusion. How about a new section (e.g. 'Data handling') in the documentation that would address this and other data-related topics....
Thanks for the contribution. I'm wondering whether your implementation shouldn't rather go into a package of its own. What do you think?
Sorry for the late reply. Maybe you have found a solution in the meantime. Anyway, I'm a bit unsure about `weighted_loss = tf.multiply(loss_map, class_weights[:, 0]+class_weights[:, 1])` Also the `image_util.ImageDataProvider` is...
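The reason that expression looks suspicious: `class_weights[:, 0] + class_weights[:, 1]` sums the weights of all classes, which multiplies every pixel's loss by the same constant and therefore changes nothing relative to the unweighted loss. What class weighting usually means is scaling each pixel's loss by the weight of its true class. A NumPy sketch of that (all names here are illustrative, not tf_unet API):

```python
import numpy as np

def weighted_pixel_loss(loss_map, labels, class_weights):
    """Sketch of per-pixel class weighting.
    loss_map:      (n_pixels,) per-pixel loss values
    labels:        (n_pixels, n_classes) one-hot ground truth
    class_weights: (n_classes,) weight per class
    Each pixel's loss is scaled by the weight of its own class,
    not by the sum of all class weights."""
    per_pixel_weight = (labels * class_weights).sum(axis=1)
    return (loss_map * per_pixel_weight).mean()
```

For example, with weights `[0.5, 2.0]`, pixels of class 1 contribute four times as much to the loss as pixels of class 0, which is the usual way to counter class imbalance.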
What I meant was that I don't understand the implementation. It's [shuffling](https://github.com/jakeret/tf_unet/blob/master/tf_unet/image_util.py#L164) the file names.
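For readers unfamiliar with that pattern: shuffling the list of file names (rather than the loaded images) is a cheap way to randomize the order per epoch while still visiting every file exactly once. A small sketch, with illustrative names:

```python
import numpy as np

def shuffled_epoch(file_names, rng):
    """Sketch of epoch-wise file shuffling: return the same set of
    file names in a fresh random order. Calling this once per epoch
    gives a different order each time with no file skipped or
    repeated."""
    order = rng.permutation(len(file_names))
    return [file_names[i] for i in order]
```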
Hi @ashahba, thank you for your contribution. I wasn't aware that this repo is being used in IntelAI benchmarks, nice. I hadn't merged #202 for two reasons -...
I'm not sure if I understand the question. What exactly would you like to do?
The `Average loss` seems to get smaller (at least in the two epochs listed). Have you checked what the curves look like in Tensorboard? Do they remain constant? `layers=3,...