CNN-RelationExtraction
[Potential NaN bug] Loss may become NaN during training
Hello~
Thank you very much for sharing the code!
I tried to use my own dataset (with the same shape as MNIST) with this code. After some iterations, the training loss became NaN. After carefully checking the code, I found that the following line may trigger the NaN loss:
In CNN-RelationExtraction/CNN.py, line 104:
cross_entropy = -tf.reduce_sum(self.y_ * tf.log(self.y_conv))
If y_conv (the output of softmax) contains a 0, tf.log(y_conv) returns -inf, because log(0) is undefined. Wherever the corresponding entry of y_ is also 0, the product 0 * (-inf) evaluates to NaN, which then propagates through the sum and makes the whole loss NaN.
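Here is a minimal sketch that reproduces the problem (assuming TensorFlow 1.x, as used in this repo; the values are artificial):

import tensorflow as tf

y_ = tf.constant([[1.0, 0.0]])      # one-hot label
y_conv = tf.constant([[1.0, 0.0]])  # saturated softmax output: second entry underflowed to 0
# tf.log(y_conv) = [0, -inf], and 0 * (-inf) = nan, so the sum becomes nan
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))

with tf.Session() as sess:
    print(sess.run(cross_entropy))  # prints nan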
It could be fixed with either of the following changes:
cross_entropy = -tf.reduce_sum(self.y_ * tf.log(self.y_conv + 1e-8))
or
cross_entropy = -tf.reduce_sum(self.y_ * tf.log(tf.clip_by_value(self.y_conv, 1e-8, 1.0)))
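Alternatively, TensorFlow provides a numerically stable op that fuses softmax and cross-entropy, so log(0) never occurs. This is only a sketch: it assumes access to the pre-softmax logits (the tensor currently fed into tf.nn.softmax; the name logits below is hypothetical, not from CNN.py):

# apply the loss to the raw logits instead of the softmax output;
# the fused op handles the log-of-zero case internally
cross_entropy = tf.reduce_sum(
    tf.nn.softmax_cross_entropy_with_logits(labels=self.y_, logits=logits))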
Hope to hear from you ~
Thanks in advance! : )