alibi
Integrated Gradient for Multi-Task Models
How can the integrated gradients explainer be used for a model with multiple outputs (e.g. a multi-task classification model where each output can take a few possible values)? Can you please provide a working example? How can one choose to get attributions with respect to each of the outputs?
At the moment there's no option to select an output within the IntegratedGradients class, but if you have a Keras or tf.keras model with multiple outputs, you can do something like this:
    from tensorflow.keras.models import Model
    from alibi.explainers import IntegratedGradients

    explanations = []
    for out in model.output:
        model_2 = Model(inputs=model.input, outputs=out)
        ig = IntegratedGradients(model_2)
        explanation = ig.explain(X)
        explanations.append(explanation)
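To make the workaround above concrete, here is a minimal, self-contained sketch built around a toy two-head model (the model, layer names, and shapes are all illustrative, not taken from any poster's actual model). The alibi-specific calls are shown only as comments so the sketch runs with TensorFlow alone; the key point is that each single-output sub-model shares the original layers, so its predictions match the corresponding head of the full model:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Toy multi-task model: one shared trunk, two classification heads.
inp = Input(shape=(8,))
trunk = Dense(16, activation="relu")(inp)
out_a = Dense(3, activation="softmax", name="task_a")(trunk)
out_b = Dense(5, activation="softmax", name="task_b")(trunk)
model = Model(inputs=inp, outputs=[out_a, out_b])

X = np.random.rand(4, 8).astype(np.float32)

# One single-output sub-model per head; the layers (and weights) are
# shared with the full model, so predictions agree head-by-head.
sub_models = [Model(inputs=model.input, outputs=out) for out in model.outputs]
for sub, full_out in zip(sub_models, model.predict(X)):
    np.testing.assert_allclose(sub.predict(X), full_out, rtol=1e-5)

# Each sub-model can now be explained separately, e.g.
#   ig = IntegratedGradients(sub_models[0])
#   explanation = ig.explain(X, target=predicted_classes_for_task_a)
```

Running the loop over `model.outputs` once per head then yields one explanation per task, which answers the original question about getting attributions with respect to each output.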
It's good to keep this issue open to figure out whether it is worth adding an option that allows passing an output as a parameter.
I am getting a Graph disconnected error when I try to do this.
@senjed could you provide a summary view of what your model looks like (i.e. using the summary() method) for the original and the new model?
We should have an example of this.
Hello, I have the same issue: I need to apply integrated gradients to a multi-task model with two inputs and two outputs, but I'm getting a Graph disconnected error when I try to create the sub-model:

    explanations = []
    out = new_model.output[0]
    inp = new_model.inputs[0]
    model_task = Model(inp, out)
    ig = IntegratedGradients(model_task)
    explanation = ig.explain(Z)
    explanations.append(explanation)
@nouna99 could you provide a short, self-contained example of the multi-task model with the error? This will help us debug the issue.
    new_model = tf.keras.models.load_model(
        my_wd,
        custom_objects={'f1_score': f1_score, 'precision': precision,
                        'recall': recall, 'AdamWeightDecay': optimizer})
    new_model.summary()

    from tensorflow.keras.models import Model

    predictions = pred_class
    explanations = []
    for out in new_model.output:
        for inp in new_model.inputs:
            model_task = Model(inputs=inp, outputs=out)
            ig = IntegratedGradients(model_task)
            explanation = ig.explain(X_test_sample)
            explanations.append(explanation)
    ValueError                                Traceback (most recent call last)
    4 frames
    /usr/local/lib/python3.7/dist-packages/keras/engine/functional.py in _map_graph_network(inputs, outputs)
        982                 'The following previous layers '
        983                 'were accessed without issue: ' +
    --> 984                 str(layers_with_complete_input))
        985         for x in tf.nest.flatten(node.outputs):
        986           computable_tensors.add(id(x))

    ValueError: Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(shape=(None, 114), dtype=tf.int32, name='input_word_ids2'), name='input_word_ids2', description="created by layer 'input_word_ids2'") at layer "keras_layer". The following previous layers were accessed without issue: []
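As a side note for anyone hitting this: a Graph disconnected error usually means the sub-model was built with fewer inputs than its selected output actually depends on. When slicing out one output of a multi-input model, all of the original inputs must still be passed. A minimal sketch reproducing both the failing and the working construction, using a toy two-input / two-output model (all names here are illustrative):

```python
import tensorflow as tf
from tensorflow.keras.layers import Concatenate, Dense, Input
from tensorflow.keras.models import Model

inp1 = Input(shape=(4,), name="inp1")
inp2 = Input(shape=(4,), name="inp2")
shared = Dense(8, activation="relu")(Concatenate()([inp1, inp2]))
out1 = Dense(2, activation="softmax", name="out1")(shared)
out2 = Dense(3, activation="softmax", name="out2")(shared)
model = Model(inputs=[inp1, inp2], outputs=[out1, out2])

# Works: select ONE output but keep ALL inputs the graph depends on.
ok = Model(inputs=model.inputs, outputs=model.outputs[0])

# Fails: out1 also depends on inp2, so dropping inp2 disconnects the graph.
try:
    Model(inputs=model.inputs[0], outputs=model.outputs[0])
except ValueError as e:
    print("reproduced:", type(e).__name__)
```

In other words, in the loop above only the outputs should be iterated over; the inputs list should stay intact (`Model(inputs=new_model.inputs, outputs=out)`).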
@nouna99 is it possible to share the model architecture, how it was constructed?
Hello, can I share it with you in private please? Regards
Hi @nouna99. There's probably no need to share it in private. Would you mind sharing a minimal piece of code that reproduces the issue and can be run as is? It would be very helpful.
OK, I will share it. I have a question about the current version of IG: does it support multiple-input and multiple-output models?
At the moment it does support multiple inputs. The comment below suggests a workaround that might work for multiple outputs, but multiple-output models are not natively supported at the moment.
At the moment there's no option to select an output within the IntegratedGradients class, but if you have a Keras or tf.keras model with multiple outputs, you can do something like this:

    from tensorflow.keras.models import Model

    explanations = []
    for out in model.output:
        model_2 = Model(inputs=model.input, outputs=out)
        ig = IntegratedGradients(model_2)
        explanation = ig.explain(X)
        explanations.append(explanation)

It's good to keep this issue open to figure out whether it is worth adding an option that allows passing an output as a parameter.
OK, thank you for your feedback. In this case, what target parameter should be passed to the explain function? Because in my case I have two outputs.
Since multiple-output models are not supported at the moment, there's no target parameter to pass in the case of multiple outputs. If the workaround I suggested (which is a workaround, and might or might not work) does not work, please provide some minimal code that reproduces the issue so we can have a look and help you.
Hello, after using the workaround you proposed, I get this error.
    X_test = [X1_test, X2_test]
    nb_samples = 10
    X_test_sample = X_test[:nb_samples]
    Z1 = np.asarray(X_test_sample[0]['input_word_ids'])
    Z2 = np.asarray(X_test_sample[1]['input_word_ids'])
    Z = [Z1, Z2]
    from tensorflow.keras.models import Model

    predictions = pred_class
    explanations = []
    for out in new_model.output:
        model_task = Model(inputs=new_model.input, outputs=out)
        ig = IntegratedGradients(model_task)
        explanation = ig.explain(Z)
        explanations.append(explanation)
The error:

    WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32
    WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32
    WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32
    WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32

    AssertionError                            Traceback (most recent call last)
    6 frames
    /usr/local/lib/python3.7/dist-packages/keras/engine/functional.py in _run_internal_graph(self, inputs, training, mask)
        557     for x in self.outputs:
        558       x_id = str(id(x))
    --> 559       assert x_id in tensor_dict, 'Could not compute output ' + str(x)
        560       output_tensors.append(tensor_dict[x_id].pop())
        561

    AssertionError: Could not compute output KerasTensor(type_spec=TensorSpec(shape=(None, 6), dtype=tf.float32, name=None), name='dense_2/Softmax:0', description="created by layer 'dense_2'")
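The tf.int32 warnings above are a useful hint: the model's inputs are integer token ids, which a gradient tape cannot differentiate through. A common remedy is to compute attributions with respect to the floating-point output of an embedding layer instead (alibi's IntegratedGradients exposes a layer argument for this purpose). Below is a hand-rolled sketch of that idea in plain TensorFlow, using a toy embedding model; all names, shapes, and the zero-embedding baseline are illustrative assumptions, not taken from the model in this thread:

```python
import numpy as np
import tensorflow as tf

vocab, seq_len, emb_dim, n_classes = 50, 6, 4, 3
emb = tf.keras.layers.Embedding(vocab, emb_dim)
head = tf.keras.Sequential([
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])

tokens = tf.constant(np.random.randint(0, vocab, size=(2, seq_len)))
target_emb = emb(tokens)                 # float tensor: differentiable
baseline = tf.zeros_like(target_emb)     # zero-embedding baseline

# Riemann approximation of the integrated-gradients path integral,
# taken in embedding space rather than over the raw integer ids.
n_steps = 20
grads = []
for alpha in np.linspace(0.0, 1.0, n_steps, dtype=np.float32):
    point = baseline + alpha * (target_emb - baseline)
    with tf.GradientTape() as tape:
        tape.watch(point)                      # watchable: float dtype
        probs = head(point)
        score = tf.reduce_max(probs, axis=-1)  # per-sample top-class prob
    grads.append(tape.gradient(score, point))

avg_grad = tf.reduce_mean(tf.stack(grads), axis=0)
# Sum over the embedding dimension for one attribution per token.
attributions = tf.reduce_sum((target_emb - baseline) * avg_grad, axis=-1)
print(attributions.shape)  # (2, 6): one score per token per sample
```

This sidesteps both the int32 warnings and the failed gradient computation, since everything downstream of the embedding lookup is differentiable.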
My model is below: