nolearn
Custom loss function using intermediate layer output and output layers
I need to create a loss function that uses the output of an intermediate layer, based on DeepID2.
Basically I have this network:
from lasagne import layers
from lasagne.layers import Conv2DLayer, MaxPool2DLayer
from nolearn.lasagne import NeuralNet

net = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('conv1', Conv2DLayer),
        ('pool1', MaxPool2DLayer),
        ('dropout1', layers.DropoutLayer),
        ('conv2', Conv2DLayer),
        ('pool2', MaxPool2DLayer),
        ('dropout2', layers.DropoutLayer),
        ('conv3', Conv2DLayer),
        ('pool3', MaxPool2DLayer),
        ('dropout3', layers.DropoutLayer),
        ('conv4', Conv2DLayer),
        ('dropout4', layers.DropoutLayer),
        ('flatten1', layers.FlattenLayer),
        ('flatten2', layers.FlattenLayer),
        ('concat', layers.ConcatLayer),
        ('hidden4', layers.DenseLayer),
        ('output', layers.DenseLayer),
    ],
    # ... remaining NeuralNet parameters (layer shapes, update rule, etc.) ...
)
And I need to change the loss function to something like this:
1 - Feed two image files per example in each mini-batch.
2 - If the two images are from the same person (same ID), the loss is:
the error for Image 1 (categorical_crossentropy[Image 1, ID1]), plus the error for Image 2 (categorical_crossentropy[Image 2, ID1]), plus the error between the hidden4 outputs for Image 1 and Image 2 (e.g. get_output from hidden4 for both images, then squared_error between them).
3 - If the images are from different persons, the loss is:
the error for Image 1 (categorical_crossentropy[Image 1, ID1]) plus the error for Image 2 (categorical_crossentropy[Image 2, ID2]).
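In plain Python, the per-pair loss described above amounts to the following. This is only a hedged sketch of the arithmetic, not nolearn code; the function names are illustrative:

```python
import math

def categorical_crossentropy(probs, label):
    # negative log-probability of the true class
    return -math.log(probs[label])

def squared_error(a, b):
    # element-wise squared difference, summed over the feature vector
    return sum((x - y) ** 2 for x, y in zip(a, b))

def pair_loss(probs1, id1, probs2, id2, feat1, feat2):
    # identification term: cross-entropy of each image against its own ID
    loss = categorical_crossentropy(probs1, id1) \
         + categorical_crossentropy(probs2, id2)
    # verification term: only for same-ID pairs, pull the hidden4
    # feature vectors of the two images together
    if id1 == id2:
        loss += squared_error(feat1, feat2)
    return loss
```

On a same-ID pair the verification term is added; on a different-ID pair the loss reduces to the two cross-entropy terms. (The DeepID2 paper additionally weights the verification term and uses a margin for different-ID pairs; both are omitted here to match the description above.)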
Is it possible to do this in nolearn?
It is definitely possible, though not out of the box. Have a look at the objective function in nolearn.lasagne.base. It is responsible for determining the loss. There you may specify a loss that depends on whatever layer(s) you wish. Then pass the new objective to nolearn's NeuralNet at initialization.
dfdf, have you solved the problem? If so, I would appreciate it if you shared what you learned. I have the same problem... Thanks in advance