Joel Akeret

Results 138 comments of Joel Akeret

1) The model state is stored after each epoch; the `path` returned by the trainer points to the exact location. 2) This could indicate that the model struggled to learn...
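For context, a minimal sketch of how the returned checkpoint path is typically used (assuming the usual tf_unet `Unet`/`Trainer` workflow; the data path and hyperparameters here are placeholders):

```python
from tf_unet import unet, image_util

# placeholder data location and parameters
data_provider = image_util.ImageDataProvider("data/train/*.tif")

net = unet.Unet(channels=1, n_class=2, layers=3, features_root=16)
trainer = unet.Trainer(net, optimizer="momentum")

# the model state is checkpointed after each epoch; `path` points to the
# stored state under the given output directory
path = trainer.train(data_provider, "./unet_trained",
                     training_iters=32, epochs=10)

# restore that state for prediction
x_test, _ = data_provider(1)
prediction = net.predict(path, x_test)
```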

Do you see the same behaviour when you run the toy model? By normalization I mean that the data has to be in the range of [0,1). In the ImageDataProvider...
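As a rough illustration of that normalization (a sketch of the kind of scaling the `ImageDataProvider` applies, not the verbatim repository code):

```python
import numpy as np

def normalize(data):
    """Shift and scale an image so its values fall into the unit range,
    similar in spirit to the data provider's preprocessing step."""
    data = np.array(data, dtype=np.float32)
    data -= np.amin(data)
    max_value = np.amax(data)
    if max_value != 0:
        data /= max_value
    return data
```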

This is great news! Happy to hear that. Generally, in deep learning, more is always better. Collect as much data as you can and then maybe also look into data...

I just realized that the documentation is not sufficiently accurate. If `n_class==2` then the implementation expects labels of shape [n_samples, ny, nx, **1**] and it will transform them into a one-hot encoded tensor.

Yes, this is correct. [Then](https://github.com/jakeret/tf_unet/blob/master/tf_unet/image_util.py#L58) `self.n_class` will define how the labels are being processed:

```python
def _process_labels(self, label):
    if self.n_class == 2:
        nx = label.shape[1]
        ny = label.shape[0]
        labels = ...
```
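For reference, the binary branch presumably continues along these lines (a sketch of the one-hot conversion, not the verbatim repository code):

```python
import numpy as np

def process_labels(label, n_class=2):
    """Sketch: for the binary case, build a one-hot array of shape
    (ny, nx, 2) where channel 1 is the foreground mask and channel 0
    its complement."""
    if n_class == 2:
        ny, nx = label.shape[0], label.shape[1]
        labels = np.zeros((ny, nx, n_class), dtype=np.float32)
        labels[..., 1] = label
        labels[..., 0] = ~label  # assumes `label` is a boolean mask
        return labels
    return label
```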

Yes, you're right. That's a bug... In the meantime you could do something like this:

```python
data_provider = SimpleDataProvider(....)
data_provider.n_class = 2
```

Yes, you're right. Have you checked if it makes a big difference? According to [deeplearning.stanford.edu](http://deeplearning.stanford.edu/wiki/index.php/Backpropagation_Algorithm): `Applying weight decay to the bias units usually makes only a small difference to the...`
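To illustrate the point, a minimal sketch of applying L2 weight decay to the kernel weights only, leaving the bias variables out (the function and the `regularizer` coefficient are illustrative, not the repository's exact cost construction):

```python
import tensorflow as tf

def add_weight_decay(loss, weight_variables, regularizer=1e-4):
    """Add an L2 penalty over the convolution kernels only; bias variables
    are deliberately excluded, since decaying them usually changes little."""
    l2_term = sum(tf.nn.l2_loss(w) for w in weight_variables)
    return loss + regularizer * l2_term
```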

Sorry for the very late reply. The power of the regularizer is a hyperparameter, like the batch size; we can cover their relation in the param search instead of having...
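As a sketch of what is meant by covering that relation in the parameter search (the `train_and_evaluate` helper is hypothetical):

```python
from itertools import product

def train_and_evaluate(batch_size, regularizer):
    """Hypothetical stand-in for a full training run; should return a
    validation score (higher is better)."""
    return 0.0  # replace with an actual training/validation loop

# treat the regularizer strength like any other hyperparameter (e.g. the
# batch size) and sweep their combinations together
batch_sizes = [8, 16, 32]
regularizers = [1e-5, 1e-4, 1e-3]

results = {(bs, reg): train_and_evaluate(batch_size=bs, regularizer=reg)
           for bs, reg in product(batch_sizes, regularizers)}
best_batch_size, best_regularizer = max(results, key=results.get)
```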

Hi Mark, thanks for the input. The dice loss has been bothering me for quite a while. Would you mind sharing it (possibly in a Pull Request) so that...
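For reference, a common formulation of the dice loss looks roughly like this (a generic sketch, not Mark's implementation and not necessarily what would land in tf_unet):

```python
import tensorflow as tf

def dice_loss(logits, labels, eps=1e-5):
    """Soft dice loss over one-hot labels:
    1 - 2 * intersection / (|prediction| + |labels|), with a small eps to
    avoid division by zero on empty masks."""
    prediction = tf.nn.softmax(logits)
    intersection = tf.reduce_sum(prediction * labels)
    union = tf.reduce_sum(prediction) + tf.reduce_sum(labels)
    return 1.0 - (2.0 * intersection + eps) / (union + eps)
```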