Ilya Matiach
@lucazav I just learned that in workspace 2.0 you can share the dashboard with other team members in the workspace, but you can't share it with anyone external to the...
hi @SamiurRahman1, the score can be computed via get_surrogate_model_replication_measure, which was just made public as part of resolving this issue: https://github.com/interpretml/interpret-community/issues/452 and PR: https://github.com/interpretml/interpret-community/pull/495. we currently don't have other metrics,...
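for a concrete usage sketch (the scikit-learn teacher model and data below are illustrative placeholders; MimicExplainer, LGBMExplainableModel, and get_surrogate_model_replication_measure are the actual interpret-community names):

```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from interpret_community.mimic.mimic_explainer import MimicExplainer
from interpret_community.mimic.models.lightgbm_model import LGBMExplainableModel

# train any "teacher" model; a simple classifier is used here for illustration
X, y = load_iris(return_X_y=True)
teacher = LogisticRegression(max_iter=1000).fit(X, y)

# fit a surrogate (mimic) explainer against the teacher model
explainer = MimicExplainer(teacher, X, LGBMExplainableModel)

# score how well the surrogate replicates the teacher on the given data
score = explainer.get_surrogate_model_replication_measure(training_data=X)
print(score)
```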
an amazing free book on interpretability has a great chapter on global surrogate models: https://christophm.github.io/interpretable-ml-book/global.html and I think the sections on advantages and disadvantages summarize this method very well. Note it...
note that we currently use the accuracy metric for classification and r^2 for regression:

```
def get_surrogate_model_replication_measure(self, training_data):
    """Return the metric which tells how well the surrogate model replicates the teacher...
```
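in spirit, the measure compares the surrogate's predictions against the teacher's on the same data; a minimal standalone sketch (replication_measure and its arguments are illustrative names, not the library's API):

```
from sklearn.metrics import accuracy_score, r2_score

def replication_measure(teacher_model, surrogate_model, data, is_classification):
    # compare the surrogate's predictions to the teacher's on the same data
    teacher_preds = teacher_model.predict(data)
    surrogate_preds = surrogate_model.predict(data)
    if is_classification:
        # fraction of rows where the surrogate reproduces the teacher's label
        return accuracy_score(teacher_preds, surrogate_preds)
    # r^2 of the surrogate's predictions relative to the teacher's
    return r2_score(teacher_preds, surrogate_preds)
```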
"i have read several research papers about different metrics like stability, robustness and efficiency" interesting, can you point to the papers specifically, maybe some of these could be implemented in...
I have a hard time believing the second paper's result that LIME is better than SHAP - perhaps it holds on that dataset, but for LIME you need to set the kernel...
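for context on why that matters, LIME's kernel width is fixed when the explainer is constructed; a minimal sketch with the lime package (the synthetic data is a placeholder):

```
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

# placeholder training data; substitute your own
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))

# kernel_width controls how local the weighted sampling neighborhood is,
# so LIME-vs-SHAP comparisons can be quite sensitive to this choice
explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"f{i}" for i in range(X_train.shape[1])],
    mode="classification",
    kernel_width=0.75 * np.sqrt(X_train.shape[1]))  # lime's default when None
```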
@amir78pgd sorry, yes, what-if analysis and ICE plots are disabled on some compute environments because they require the UX to make a request to the flask service running in...
it seems we need to update to tensorflow >2.5.0 for python 3.9, but that breaks shap's DeepExplainer, so we can't really do that unless we disable those...
you can try upgrading tensorflow in the test dependencies, but I think you might start hitting issues in the tests, specifically with shap's DeepExplainer.
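if someone wants to experiment, PEP 508 environment markers are one way to pin tensorflow per python version in the test dependencies; a hypothetical sketch for setup.py (the version bounds are guesses, and the DeepExplainer tests would still need to be skipped on 3.9):

```
# hypothetical test-dependency sketch for setup.py, using PEP 508
# environment markers to select tensorflow per python version
extras_require = {
    "test": [
        # an older tensorflow keeps shap's DeepExplainer tests passing
        'tensorflow<=2.5.0; python_version < "3.9"',
        # python 3.9 needs a newer tensorflow, which breaks DeepExplainer
        'tensorflow>2.5.0; python_version >= "3.9"',
    ],
}
```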
@gaugup it is cached in the individual explainers (e.g. the mimic explainer, see: https://github.com/interpretml/interpret-community/blob/0fff38037194a4dd277fe0c6555a52415e417b7b/python/interpret_community/mimic/mimic_explainer.py#L305) and then used to set it on the explanation object (e.g. see https://github.com/interpretml/interpret-community/blob/0fff38037194a4dd277fe0c6555a52415e417b7b/python/interpret_community/mimic/mimic_explainer.py#L463). Maybe we can add...