
Global Interpreters

Closed shanky-kapoor opened this issue 4 years ago • 4 comments

https://github.com/PAIR-code/lit/blob/a88c58005c3b15694125e15e6165ee5fba7407d0/lit_nlp/app.py#L326

Hey, I see all the explainers (LIME, GradNorm, IG, etc.) operate at a local/instance level. Is there anything planned for explainers at a global/model level as well? Or am I missing something? Let me know.

shanky-kapoor avatar Feb 04 '21 22:02 shanky-kapoor

By global/model, do you mean things like metrics? The interpreter API is very general and we use it for both; in the case of something like a salience map (such as GradientNorm()) it usually just gets called on a single example (i.e. a list of length 1), but something like metrics will be called on the whole dataset or a slice of it.
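For concreteness, a minimal sketch of those two call patterns through the same interpreter API (module paths and component names here are assumptions based on lit_nlp.components, so check your LIT version; dataset and model are set up as in the snippet further down):

from lit_nlp.components import gradient_maps, metrics

# Instance-level: a salience interpreter is typically run on a single
# example, passed as a list of length 1.
salience = gradient_maps.GradientNorm()
per_example = salience.run([dataset.examples[0]], model, dataset)

# Dataset-level: a metrics component runs over the whole dataset (or a
# slice of it) through the same run() interface.
metric = metrics.MulticlassMetrics()
aggregate = metric.run(dataset.examples, model, dataset)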

iftenney avatar Feb 04 '21 22:02 iftenney

No, I mean something like how Naive Bayes models give feature importance at the model level. Is there a plan to do some kind of aggregation of local explanations that eventually becomes a global explanation? Please look at the following papers; they make this much clearer.

https://arxiv.org/pdf/2003.06005.pdf
https://arxiv.org/pdf/1907.03039.pdf
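For illustration, a minimal sketch of that kind of model-level importance using scikit-learn's Naive Bayes (the toy data is purely illustrative, and get_feature_names_out needs scikit-learn 1.0+):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["great movie", "terrible plot", "great acting", "terrible movie"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = MultinomialNB().fit(X, labels)

# feature_log_prob_ holds per-class log P(token | class): a global,
# model-level view of which tokens matter, with no single instance involved.
for token, log_prob in zip(vec.get_feature_names_out(), clf.feature_log_prob_[1]):
    print(token, log_prob)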

shanky-kapoor avatar Feb 04 '21 23:02 shanky-kapoor

Yes! It's on our roadmap to support something like this. We don't have it in the UI yet, but in the meantime you can use the salience components offline (e.g. in a notebook) and perform your own aggregation on the results.

E.g., to run LIME on the first 100 examples from a dataset:

# LIME ships as a LIT component under lit_nlp.components.
from lit_nlp.components import lime_explainer

dataset = SSTData(...)
model = SentimentModel(...)
lime = lime_explainer.LIME()
# Run LIME over the first 100 examples; returns one result per example.
results = lime.run(dataset.examples[:100], model, dataset)
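And a hedged sketch of one possible aggregation in the spirit of the second paper above: average the absolute per-token LIME scores across examples to get a crude global ranking. The structure of results (a per-example dict mapping an output field, hypothetically named "sentence", to an object with .tokens and .salience) is an assumption; inspect results for your LIT version:

from collections import defaultdict
import numpy as np

token_scores = defaultdict(list)
for res in results:  # one entry per input example
    sal = res["sentence"]  # hypothetical field name; check your model's spec
    for token, score in zip(sal.tokens, sal.salience):
        token_scores[token].append(abs(score))

# Mean absolute salience per token as a simple global importance score.
global_importance = {t: float(np.mean(s)) for t, s in token_scores.items()}
top_20 = sorted(global_importance.items(), key=lambda kv: -kv[1])[:20]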

iftenney avatar Feb 24 '21 20:02 iftenney

Yes! I was able to replicate the results from the second paper above. Is there a roadmap document I can refer to?

shanky-kapoor avatar Mar 08 '21 21:03 shanky-kapoor

Closing this issue due to inactivity.

RyanMullins avatar Jan 16 '24 15:01 RyanMullins