
Can I use IntrospectiveRationaleExplainer to explain a pre-trained model?

Open nochimake opened this issue 1 year ago • 1 comment

Hello, I have a pre-trained model for text sentiment polarity classification; its architecture is roughly RoBERTa + TextCNN. Can I use the Introspective Rationale Explainer to interpret its output? I want to obtain the importance/contribution of each word to the final predicted polarity.

nochimake · Feb 19 '24

@nochimake I would suggest trying Explainable AI (XAI) tooling. Explainable AI aims to make the decision-making processes of machine learning models transparent and interpretable. See https://github.com/explainX/explainx. Through its LIME and SHAP integrations, you can interpret a model's decisions through visualizations. You could also use the Introspective Rationale Explainer for this, but compare the accuracy of the resulting explanations; in my opinion, the XAI route gives the best results. Please let me know if this helps. Thanks
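As a rough illustration of the LIME route, here is a minimal sketch of getting word-level attributions for a sentiment classifier. It assumes the model can be wrapped as a Hugging Face pipeline; the model name, class names, and example sentence below are placeholders, not the asker's actual RoBERTa+TextCNN model.

```python
# Minimal sketch: word-level attributions with LIME for a sentiment classifier.
# Assumptions: the pre-trained model is exposed as a Hugging Face pipeline;
# the checkpoint and class names below are illustrative stand-ins.
import numpy as np
from lime.lime_text import LimeTextExplainer
from transformers import pipeline

# Hypothetical stand-in for the user's pre-trained classifier.
clf = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment",
    return_all_scores=True,
)

def predict_proba(texts):
    # LIME expects an (n_samples, n_classes) array of class probabilities.
    outputs = clf(list(texts))
    # Sort by label name so the columns stay in a fixed class order.
    return np.array(
        [[d["score"] for d in sorted(out, key=lambda d: d["label"])]
         for out in outputs]
    )

explainer = LimeTextExplainer(class_names=["negative", "neutral", "positive"])
explanation = explainer.explain_instance(
    "The movie was surprisingly good.",
    predict_proba,
    num_features=6,   # top words to report
    num_samples=500,  # fewer perturbations for a quick demo
    top_labels=1,     # explain only the predicted class
)
top = explanation.top_labels[0]
print(explanation.as_list(label=top))  # (word, weight) pairs
```

Each returned weight is the word's estimated contribution toward (positive weight) or against (negative weight) the predicted polarity, which is exactly the per-word importance asked about above.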

Siddharth-Latthe-07 · Jul 04 '24