neural-api
Cross entropy learning
I hope I'm not being too annoying, but you are the experts in this area, so I'd like to discuss another neat feature with you...
While browsing through "Neural Networks for Pattern Recognition" by C. M. Bishop, I noticed that there is more than the standard error backpropagation with mean squared error: there is also a loss function called cross entropy. Several sources claim that this loss function allows faster learning progress.
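To make the faster-learning claim concrete, here is a minimal Python sketch (not using this library's API; the function names are mine) of the usual argument for binary classification with a sigmoid output: the sigmoid derivative cancels out of the cross-entropy gradient, so a saturated, badly-wrong unit still receives a strong error signal, while under MSE the gradient nearly vanishes.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mse_grad(z, target):
    # d/dz of 0.5 * (sigmoid(z) - target)^2 contains the factor
    # sigmoid'(z) = y * (1 - y), which vanishes when the unit saturates
    y = sigmoid(z)
    return (y - target) * y * (1.0 - y)

def ce_grad(z, target):
    # d/dz of binary cross entropy with a sigmoid output:
    # the sigmoid'(z) factor cancels, leaving just the raw error
    y = sigmoid(z)
    return y - target

# A saturated unit that is badly wrong: z = 5 but target = 0
print(mse_grad(5.0, 0.0))  # tiny gradient -> slow learning
print(ce_grad(5.0, 0.0))   # gradient near 1 -> strong correction
```

The gradient magnitudes differ by roughly two orders of magnitude in this saturated case, which is the usual explanation for the speed-up those sources describe.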
What do you think? Would that be a viable feature for the library?