Aobo Yang

Results: 27 comments of Aobo Yang

@dajtmullaj sorry for our late reply. Captum does not require your model to be a classification model. You can pass in any customized `forward_func` whose inputs are the features you want to...
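For reference, a minimal sketch of that idea, using a hypothetical regression model (no classes at all); all names here are illustrative:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Hypothetical regression model: a single scalar output per example, no classes.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

def forward_func(x):
    # Return one value per example; Captum attributes with respect to this output.
    return model(x).squeeze(-1)

ig = IntegratedGradients(forward_func)
inputs = torch.randn(4, 8, requires_grad=True)
attributions = ig.attribute(inputs)  # no `target` needed when the output is one scalar per example
print(attributions.shape)            # same shape as inputs: (4, 8)
```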

@Luckick Sorry for the late reply! This is a great question, but unfortunately it touches a known limitation of Captum right now. Captum assumes you provide a single baseline whose values...
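As a rough sketch of the single-baseline usage mentioned above (reusing the hypothetical `forward_func` from the previous sketch):

```python
import torch
from captum.attr import IntegratedGradients

ig = IntegratedGradients(forward_func)  # hypothetical forward_func as above
inputs = torch.randn(4, 8)

# One baseline tensor shared by every example; it must be broadcastable
# to the shape of `inputs` (here: all zeros).
baseline = torch.zeros(1, 8)
attributions = ig.attribute(inputs, baselines=baseline)
```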

Signal processing is not a common use case for us. But did you encounter any specific difficulties when using Captum? Let us know if you think it is difficult to...

@marc-gav `input_indices` are the word token **IDs**, which are integers. They have no gradients. They are mapped to embeddings in BERT, and that lookup is not differentiable. You cannot...
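The usual workaround is to attribute at the embedding layer with `LayerIntegratedGradients` instead of attributing the integer IDs directly. A minimal sketch, assuming a Hugging Face BERT classifier (model and layer names are illustrative, check your own model):

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification
from captum.attr import LayerIntegratedGradients

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

def forward_func(input_ids, attention_mask):
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

enc = tokenizer("a short example sentence", return_tensors="pt")
input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]

# Attribute at the embedding layer, since the ID -> embedding lookup has no gradient.
lig = LayerIntegratedGradients(forward_func, model.bert.embeddings)
attributions = lig.attribute(
    input_ids,
    baselines=torch.full_like(input_ids, tokenizer.pad_token_id),
    additional_forward_args=(attention_mask,),
    target=0,                # class index to attribute
)
print(attributions.shape)    # (1, seq_len, hidden_size)
```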

So basically `IntegratedGradients` expects your `forward_func` to output a tensor as the target to calculate gradients against. We don't make any assumptions about your model's architecture or purpose. So return the...
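If your model's forward returns something other than a plain tensor (a tuple, a dict, an output object), a thin wrapper is usually enough. A sketch with a hypothetical model that returns several outputs:

```python
import torch
from captum.attr import IntegratedGradients

# Hypothetical model whose forward returns (prediction, hidden_state).
def forward_func(features):
    prediction, _hidden = model(features)  # keep only the tensor you care about
    return prediction                      # Captum computes gradients against this tensor

ig = IntegratedGradients(forward_func)
features = torch.randn(4, 8)
attributions = ig.attribute(features)      # add `target=...` if `prediction` has more than one column
```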

@saxenarohit thanks for reminding us. We will add it soon.

Hi @Dongximing, it should be the embedding layer of your model. As a token is discrete, the backpropagated gradient stops at its embedding. For Llama2, it would be something like...
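A hedged sketch for a Hugging Face Llama2 checkpoint: the token-embedding layer is typically `model.model.embed_tokens`, but verify on your own model (e.g. with `print(model)`); the checkpoint name and target choice below are only illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from captum.attr import LayerIntegratedGradients

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model.eval()

def forward_func(input_ids):
    # Attribute the logits at the last position; adjust to your own use case.
    return model(input_ids=input_ids).logits[:, -1, :]

input_ids = tokenizer("The quick brown fox", return_tensors="pt").input_ids

# Attribute at the token-embedding layer, since token IDs themselves have no gradient.
lig = LayerIntegratedGradients(forward_func, model.model.embed_tokens)
attributions = lig.attribute(input_ids, target=int(input_ids[0, -1]))  # target = a vocabulary index
```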

@ThinkInFuture we do not have such a plan right now. Are you facing any specific issues using Captum with an "audio mir model"? How is the "audio mir model" different from...

@berkuva could you paste the stack trace of the error and the code for context?

@Hossein-1991 can you paste the error stack trace for me to confirm where the error is thrown? You are using `IntegratedGradients`, but your input `tokens`, which are word IDs, are not used...