tf_unet
Own Weights
Hello,
I tried to build my own weight map with a function whose input is my labels (a binary matrix). The result is a float matrix of size 10x256x256x2 (= batch_size x imageSizeX x imageSizeY x numClasses). I then pass my weights like this in main.py:
net = unet.Unet(layers, features_root, ... , cost_kwargs=dict(class_weights=my_weights))
In "unet.py", in the function "_get_cost()", the following branch of the if-statement is taken when cross_entropy is used:
if class_weights is not None:
    class_weights = tf.constant(np.array(class_weights, dtype=np.float32))
    class_weights = tf.reshape(class_weights, [-1, self.n_class])  # This part is from me
    weight_map = tf.multiply(flat_labels, class_weights)
    weight_map = tf.reduce_sum(weight_map, axis=1)
    loss_map = tf.nn.softmax_cross_entropy_with_logits(logits=flat_logits,
                                                       labels=flat_labels)
    weighted_loss = tf.multiply(loss_map, weight_map)
    loss = tf.reduce_mean(weighted_loss)
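For reference, the weighting math in the snippet above can be checked outside of TensorFlow. Here is a minimal NumPy sketch (the shapes and the [0.1, 0.9] weights are illustrative assumptions, not values from the thread) showing what tf.multiply followed by tf.reduce_sum(axis=1) computes: each pixel picks up the weight of its own class.

```python
import numpy as np

# Illustrative flattened one-hot labels, shape (n_pixels, n_class).
flat_labels = np.array([[1, 0],
                        [0, 1],
                        [1, 0]], dtype=np.float32)

# Assumed per-class weight vector, broadcast over all pixels.
class_weights = np.array([0.1, 0.9], dtype=np.float32)

# Same as tf.multiply(flat_labels, class_weights) followed by
# tf.reduce_sum(..., axis=1): per-pixel weights [0.1, 0.9, 0.1].
weight_map = (flat_labels * class_weights).sum(axis=1)
```

Note that a NaN loss cannot come from this reduction itself as long as class_weights and flat_labels are finite, which points the debugging at the values actually fed in.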
Now I'm receiving an error because the loss is NaN. Do you have an idea what the reason for this error could be? Or do you have an example of using your own weights?
I was using the weights just to weight the different classes, not single pixels.
I don't immediately see the issue with your implementation. I assume it works fine when weight_map consists of ones?
Here is the description of the parameter https://github.com/jakeret/tf_unet/blob/master/tf_unet/unet.py#L213
Thank you! I've read that before, and I'd like to see an example. If I have a binary label map of size [1000, 1000] and I want to give label 1 a weight of 0.9 and label 0 a weight of 0.1, what should I do?
This example goes in that direction: https://github.com/jakeret/tf_unet/blob/master/scripts/ufig_launcher.py#L55
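For the binary case asked about above, one plausible reading (a sketch, not confirmed by the maintainer) is to pass cost_kwargs=dict(class_weights=[0.1, 0.9]) to unet.Unet and let _get_cost() turn that per-class vector into a per-pixel weight map. The NumPy equivalent of that step, assuming tf_unet's one-hot label layout:

```python
import numpy as np

# Assumed binary label map of shape (1000, 1000), values in {0, 1}.
labels = np.random.randint(0, 2, size=(1000, 1000))

# One-hot encode to (1000, 1000, 2): channel 0 marks label 0, channel 1 label 1.
one_hot = np.stack([labels == 0, labels == 1], axis=-1).astype(np.float32)

# Broadcasting the per-class vector against the one-hot labels and summing
# over the class axis yields 0.9 wherever the label is 1 and 0.1 wherever
# it is 0 -- the per-pixel weight map the loss is multiplied by.
class_weights = np.array([0.1, 0.9], dtype=np.float32)
weight_map = (one_hot * class_weights).sum(axis=-1)
```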