Loss tracker for multiple model outputs
Currently, only the total loss is tracked and displayed. Can we have more detailed information for each individual loss?
I have a workaround that recomputes the losses as metrics, but it seems very inefficient.
@james77777778 - Just wanted to confirm I captured the issue correctly.
import numpy as np

import keras
from keras import layers

inputs = layers.Input(shape=(2,))
out1 = layers.Dense(1, activation=None, name="out1")(inputs)
out2 = layers.Dense(1, activation="sigmoid", name="out2")(inputs)
model = keras.Model(inputs=inputs, outputs=[out1, out2])
model.compile("sgd", ["mse", "binary_crossentropy"])

x = np.random.random((10, 2))
y1 = np.random.random((10, 1))
y2 = np.random.randint(0, 2, (10, 1))
model.fit(x, [y1, y2], epochs=2)
Keras 2 Output:
Epoch 1/2
1/1 [==============================] - 1s 720ms/step - loss: 1.7548 - out1_loss: 0.9954 - out2_loss: 0.7594
Epoch 2/2
1/1 [==============================] - 0s 9ms/step - loss: 1.6991 - out1_loss: 0.9401 - out2_loss: 0.7590
Keras 3 Output:
Epoch 1/2
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 277ms/step - loss: 1.7056
Epoch 2/2
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 58ms/step - loss: 1.6477
Unlike Keras 2, Keras 3 doesn't output the individual losses; they need to be added via metrics. @fchollet - WDYT? Should Keras 3 also output individual losses?
This is documented as one of the differences in the Keras 2 vs Keras 3 compatibility issues doc #18467 -
- When a model has multiple named outputs (for example output_a and output_b), old tf.keras adds <output_a>_loss, <output_b>_loss, and so on to the metrics. Keras 3 doesn't add them to the metrics automatically; they need to be requested by explicitly providing them in the metrics list of the individual outputs.
So, the way forward is to explicitly add them to metrics, unless @fchollet suggests otherwise -
model = keras.Model(inputs=inputs, outputs=[out1, out2])
model.compile("sgd", ["mse", "binary_crossentropy"], metrics=["mse", "binary_crossentropy"])
Epoch 1/2
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 752ms/step - loss: 1.6307 - out1_mse: 0.7819 - out2_binary_crossentropy: 0.8488
Epoch 2/2
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 41ms/step - loss: 1.5852 - out1_mse: 0.7376 - out2_binary_crossentropy: 0.8476
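A variant of the workaround above uses dicts keyed by output name, so each metric stays attached to the right head. This sketch assumes Keras 3 accepts per-output dicts for loss and metrics in compile, as Keras 2 did:

```python
import numpy as np

import keras
from keras import layers

inputs = layers.Input(shape=(2,))
out1 = layers.Dense(1, name="out1")(inputs)
out2 = layers.Dense(1, activation="sigmoid", name="out2")(inputs)
model = keras.Model(inputs=inputs, outputs=[out1, out2])

# Dicts keyed by output name make the loss/metric pairing explicit.
model.compile(
    "sgd",
    loss={"out1": "mse", "out2": "binary_crossentropy"},
    metrics={"out1": ["mse"], "out2": ["binary_crossentropy"]},
)

x = np.random.random((10, 2))
y1 = np.random.random((10, 1))
y2 = np.random.randint(0, 2, (10, 1)).astype("float32")
history = model.fit(x, [y1, y2], epochs=1, verbose=0)
```

The history then contains per-output metric entries (e.g. out1_mse) alongside the total loss.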
Hi @sampathweb
Yes, we can explicitly add them using metrics. However, this approach might be less efficient when the loss computation is expensive, and it is inconvenient to require users to implement custom metrics.
For example, the IoU loss and classification loss in an object detection model must be implemented both as a Loss and as a Metric, so the same computation runs during training and again when computing metrics.
I understand this works for simple cases (like the predefined losses) used as metrics. But what if I have, say, an intermediate result tensor in some layer that I have added to the losses via the Layer.add_loss API? How can I surface that loss as a metric? Specifically, I can't just add a callable or define a custom Metric, because the computation takes more than the single tensors y_true and y_pred into account. Is there any way of adding a specific tensor to the metrics, similar to how add_metric used to work in the past?