Aobo Yang
@Hossein-1991 First of all, based on your code, you only converted your tensors to `long` inside your model call, not for Captum: `y = model(ids=x.long(), masks=y.long())`. So if...
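For illustration, here is a minimal, self-contained sketch of converting the tensors once and reusing the `long` versions for Captum as well. The toy model, the `ids`/`masks` keyword names, and the use of `LayerIntegratedGradients` are assumptions for the example, not your actual code:

```py
import torch
import torch.nn as nn
from captum.attr import LayerIntegratedGradients

# Toy stand-in for the wrapped model; only the ids/masks keyword interface
# is taken from the snippet above, the internals are made up.
class ToyModel(nn.Module):
    def __init__(self, vocab_size=100, dim=16):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, dim)
        self.classifier = nn.Linear(dim, 2)

    def forward(self, ids, masks):
        emb = self.embeddings(ids) * masks.unsqueeze(-1).float()
        return self.classifier(emb.mean(dim=1))

model = ToyModel()
x = torch.randint(0, 100, (1, 8)).float()  # pretend these arrived as floats
y = torch.ones(1, 8)

# Convert once and reuse the long tensors everywhere, including in the
# Captum call, not only inside the plain forward pass.
ids, masks = x.long(), y.long()

def forward_func(input_ids, attention_mask):
    return model(ids=input_ids, masks=attention_mask)

lig = LayerIntegratedGradients(forward_func, model.embeddings)
attributions = lig.attribute(inputs=ids, additional_forward_args=(masks,), target=1)
print(attributions.shape)  # (1, 8, 16) -- one attribution per embedding dim
```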
@Hossein-1991 Please pay attention to this part of your log:

```py
/usr/local/lib/python3.8/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
   1010 head_mask =...
```
Hi, could you elaborate more, like any issues you encountered? Any pseudo code to show us what you are trying to achieve? If you are trying to understand which tokens...
Yup, that's the use case I imagined. The above pseudo code gives a valid high-level flow of how to craft your forward_func/model. The error you encountered seems just to be an...
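To make that flow concrete, a hypothetical sketch of such a forward_func wrapper is below. The `model`, its HuggingFace-style `.logits` output, and the choice to return the probability of one target class are all assumptions for illustration, not the exact pseudo code from above:

```py
import torch

def make_forward_func(model, target_idx):
    """Wrap a classifier so Captum receives one scalar per example."""
    def forward_func(input_ids, attention_mask):
        out = model(input_ids=input_ids, attention_mask=attention_mask)
        probs = torch.softmax(out.logits, dim=-1)
        return probs[:, target_idx]  # shape (batch,), one value per example
    return forward_func
```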
@MARUD84 The code looks great to me. What is the exact issue you have? Is it the error you mentioned, `TypeError: len() of a 0-d tensor`? If so, you may...
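In case it helps, one common cause of that error is a forward function whose output loses the batch dimension, so Captum ends up calling `len()` on a scalar. A tiny illustration (the shapes here are made up):

```py
import torch

logits = torch.randn(1, 2)   # pretend this is the model output

bad = logits[0, 1]           # 0-d tensor: len(bad) raises the TypeError
good = logits[:, 1]          # shape (1,): keeps the batch dimension

print(bad.dim(), good.dim()) # 0 1
```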
Hi @vedal, it depends on your needs. Captum, as a tool itself, is not opinionated. It is feasible to use either way. Generally, "with Softmax" is more reasonable for...
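For example, the two options roughly look like this; `model` here is just a placeholder linear classifier standing in for your own network:

```py
import torch

model = torch.nn.Linear(4, 3)   # placeholder classifier
inputs = torch.randn(2, 4)

def forward_logits(x):
    return model(x)                          # raw, unbounded scores

def forward_probs(x):
    return torch.softmax(model(x), dim=-1)   # bounded [0, 1], sums to 1 per row

print(forward_logits(inputs).shape, forward_probs(inputs).shape)  # (2, 3) (2, 3)
```

One design note: a softmax probability for one class depends on all classes' logits, so its attributions reflect that coupling, while attributing a raw logit looks at that class's score in isolation.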
@vedal, what you said makes sense. But as Captum is trying to be a generic library for all kinds of models & problems, such an "append_softmax" option will likely only...