Joel Akeret
Thanks for reporting this. I'm just wondering why the `output_map` gets so large.
@weiliu620 thanks for the hint. I'm going to look into this
@weiliu620 following the lines from [here](https://gist.github.com/raingo/a5808fe356b8da031837) referenced in your [SO question](https://stackoverflow.com/questions/36850531/per-pixel-softmax-for-fully-convolutional-network), we would just have to subtract the result of `tf.reduce_max` in the `tf.exp` call, right?
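Roughly what I have in mind (a minimal sketch, assuming the `output_map` logits have shape `[batch, nx, ny, n_class]` and a TF version that accepts the `keepdims` argument):

```python
import tensorflow as tf

def pixel_wise_softmax(output_map):
    # Subtract the per-pixel maximum over the class axis before exponentiating,
    # so tf.exp cannot overflow for large activations (log-sum-exp trick).
    max_per_pixel = tf.reduce_max(output_map, axis=3, keepdims=True)
    exponential_map = tf.exp(output_map - max_per_pixel)
    normalizer = tf.reduce_sum(exponential_map, axis=3, keepdims=True)
    return exponential_map / normalizer
```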
Sorry for the late reply. I'm wondering if something is not quite right with the size cropping of the labels prior to computing the cross entropy.
I was using the weights just to weight the different classes, not single pixels. I don't immediately see the issue with your implementation. I assume it works fine when `weight_map...
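For reference, this is roughly the kind of class weighting I had in mind (a sketch, not the exact code in `unet.py`; the function name and variables are just for illustration, assuming flattened one-hot labels and logits of shape `[n_pixels, n_class]`):

```python
import tensorflow as tf

def class_weighted_cross_entropy(flat_logits, flat_labels, class_weights):
    # One weight per class, e.g. [1.0, 5.0] to emphasize the foreground class
    class_weights = tf.constant(class_weights, dtype=tf.float32)
    # Weight of each pixel = weight of its true class (labels are one-hot)
    pixel_weights = tf.reduce_sum(flat_labels * class_weights, axis=1)
    loss_map = tf.nn.softmax_cross_entropy_with_logits(logits=flat_logits,
                                                       labels=flat_labels)
    return tf.reduce_mean(loss_map * pixel_weights)
```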
Here is the description of the parameter https://github.com/jakeret/tf_unet/blob/master/tf_unet/unet.py#L213
This example goes in that direction: https://github.com/jakeret/tf_unet/blob/master/scripts/ufig_launcher.py#L55
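Something along these lines (a sketch; if I recall the constructor arguments correctly, the class weights go in via `cost_kwargs`, but double-check against `unet.py`):

```python
from tf_unet import unet

# Sketch: weight the two classes 1:4 through the cross entropy cost
net = unet.Unet(channels=1,
                n_class=2,
                layers=3,
                features_root=16,
                cost="cross_entropy",
                cost_kwargs=dict(class_weights=[0.2, 0.8]))
```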
Hi @rubbyaworka, this code hasn't been maintained for quite a while. There is a TensorFlow 2.0 compatible reimplementation of `tf_unet` available here: https://github.com/jakeret/unet
Sorry for the late reply. Maybe this is still of help. The reshaping looks a bit convoluted. Adding a new axis can be done like this: `test_data = img[np.newaxis,...]`. As...
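A quick sketch of what I mean (assuming `img` already carries a channel axis, i.e. has shape `(nx, ny, channels)`):

```python
import numpy as np

img = np.random.rand(572, 572, 1)   # placeholder for the loaded image
test_data = img[np.newaxis, ...]    # -> shape (1, 572, 572, 1)

# If the image is plain 2D (nx, ny), the channel axis can be added the same way:
# test_data = img[np.newaxis, ..., np.newaxis]
```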
I'm not quite sure what is going on, but shouldn't your GT be of shape `[1, 522, 522, 2]`?