Joel Akeret

138 comments

@agrafix from a first glance at your code I guess it's ok. Wondering if this has something to do with issue #28

@panovr with deep learning there is typically not one correct answer. Tuning the hyperparameters is the "art" of this approach. To get started I would use 3-4 layers, 64 features,...
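For concreteness, a minimal sketch of such a starting point with tf_unet, assuming the `layers`/`features_root` keyword arguments shown in the project README (channels and class count are placeholders for your own data):

```python
from tf_unet import unet

# Illustrative starting configuration: 3 layers, 64 feature maps in the first layer.
# channels=1 assumes grayscale input, n_class=2 assumes a binary segmentation task.
net = unet.Unet(channels=1,
                n_class=2,
                layers=3,
                features_root=64)
```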

Most of the time I'm also working with grayscale images. Another thing you could experiment with is the `dice_coefficient` loss function instead of the default cross entropy. btw: cool data...
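To illustrate what that loss optimizes, here is a rough NumPy sketch of the soft dice coefficient (not the tf_unet implementation; `eps` is just an illustrative smoothing term). In tf_unet the loss can be selected via the `cost="dice_coefficient"` constructor argument.

```python
import numpy as np

def dice_coefficient(prediction, ground_truth, eps=1e-5):
    """Soft dice coefficient between a predicted probability map and a binary mask."""
    intersection = np.sum(prediction * ground_truth)
    union = np.sum(prediction) + np.sum(ground_truth)
    return (2.0 * intersection + eps) / (union + eps)

# A dice loss is typically 1 - dice (or -dice), so maximizing overlap minimizes the loss.
```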

@amejs training a network with such a high class imbalance is very hard. A weighted loss or the dice coefficient loss function is probably only going to help marginally. You...
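If you still want to try a weighted loss, a sketch of how it could be configured, assuming the `cost_kwargs`/`class_weights` option of the tf_unet constructor (the weights below are purely illustrative and need tuning for the actual class ratio):

```python
from tf_unet import unet

# Weighted cross entropy: give the rare foreground class more weight than the background.
net = unet.Unet(channels=1,
                n_class=2,
                layers=3,
                features_root=64,
                cost="cross_entropy",
                cost_kwargs=dict(class_weights=[0.1, 0.9]))
```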

@amejs in #28 it was reported: > Quick update. Found the issue. There is a bug in layers.py: in `pixel_wise_softmax_2` and `pixel_wise_softmax`, if the output_map is too large, then exponential_map...
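A quick way to see the kind of problem being described: exponentiating large activations directly overflows to `inf`, and the subsequent normalization yields NaN. A NumPy sketch (the numbers are only illustrative):

```python
import numpy as np

logits = np.array([1000.0, 990.0])   # large activations in the output map

# Naive softmax: exp() overflows to inf, and inf/inf gives NaN
naive = np.exp(logits) / np.sum(np.exp(logits))
print(naive)   # -> [nan nan]
```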

Hard to tell what is going on. Have you experimented with some preprocessing e.g. data/batch normalization?

The data is automatically normalized to [0, 1). In this particular case I'm talking about [this](https://www.quora.com/Why-does-batch-normalization-help) kind of normalization (zero-mean and unit variance)
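A minimal sketch of that kind of preprocessing (per-image standardization, on top of the automatic [0, 1) scaling; the small epsilon only guards against constant images):

```python
import numpy as np

def standardize(image):
    """Shift to zero mean and scale to unit variance (per image)."""
    image = image.astype(np.float32)
    return (image - np.mean(image)) / (np.std(image) + 1e-8)
```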

This class imbalance is certainly not helping. Someone reported here that this might also be caused by an overflow/underflow problem in the softmax layer (#28). Unfortunately, I haven't had the...

I suspect that the initial problem is in the computation of the [`pixel_wise_softmax`](https://github.com/jakeret/tf_unet/blob/master/tf_unet/layers.py#L61). I think it might be worth investigating whether changing the implementation solves the issue. Instead of using...
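One possible replacement is the standard max-subtraction trick (which is also what `tf.nn.softmax` does internally). A sketch against the TensorFlow 1.x API that tf_unet uses; treat it as an idea to test rather than a drop-in fix:

```python
import tensorflow as tf

def stable_pixel_wise_softmax(output_map):
    # Subtract the per-pixel maximum over the class axis before exponentiating,
    # so exp() cannot overflow even for large activations.
    max_axis = tf.reduce_max(output_map, axis=3, keepdims=True)
    exponential_map = tf.exp(output_map - max_axis)
    normalize = tf.reduce_sum(exponential_map, axis=3, keepdims=True)
    return exponential_map / normalize

# In effect this is a softmax over the class dimension (axis 3) of the output map.
```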

You can load your data using the same data provider mechanism, so you ensure that the same pre-processing steps are applied
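A sketch of that, assuming the `ImageDataProvider` from tf_unet's `image_util` (the glob pattern and mask suffix are placeholders for your own files):

```python
from tf_unet import image_util

# Using the same provider for training and prediction keeps the pre-processing identical.
data_provider = image_util.ImageDataProvider("data/test/*.tif", mask_suffix="_mask.tif")
x_test, y_test = data_provider(1)   # fetch one image/label pair
```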