alibi
Algorithms for explaining machine learning models
This is a tracking issue for implementing a general validation strategy for potentially conflicting user arguments. There are multiple places in the code base where such validation is missing, e.g....
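One possible shape for such a strategy is a small reusable helper that checks mutually exclusive argument groups in one place. This is purely an illustrative sketch; the function name and signature are hypothetical, not alibi's actual API.

```python
# Hypothetical sketch of a reusable validator for conflicting user arguments.
# `check_exclusive` is illustrative, not part of alibi.

def check_exclusive(supplied: dict, *groups: tuple) -> None:
    """Raise ValueError if more than one argument in any mutually
    exclusive group was supplied (i.e. is not None)."""
    for group in groups:
        present = [name for name in group if supplied.get(name) is not None]
        if len(present) > 1:
            raise ValueError(
                f"Arguments {present} are mutually exclusive; supply at most one."
            )

# Example: a user may set either `threshold` or `n_samples`, not both.
check_exclusive({"threshold": 0.95, "n_samples": None}, ("threshold", "n_samples"))
```

Centralising the check means each explainer only declares its conflicting groups instead of re-implementing the validation logic.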
This issue is to track discussions around what parameters a user should be able to override at `explain` time regardless of the explainer setup at construction. Initial discussion can be...
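A minimal sketch of one option under discussion: keep an explicit whitelist of parameters that may be overridden at `explain` time and merge them over the values fixed at construction. The whitelist contents and function name here are hypothetical.

```python
# Illustrative sketch (not alibi's actual mechanism): whitelist which
# parameters can be overridden at explain time, reject everything else.

OVERRIDABLE = {"threshold", "batch_size"}  # hypothetical whitelist

def merge_params(init_params: dict, overrides: dict) -> dict:
    unknown = set(overrides) - OVERRIDABLE
    if unknown:
        raise ValueError(f"Cannot override at explain time: {sorted(unknown)}")
    merged = dict(init_params)
    merged.update(overrides)
    return merged

params = merge_params({"threshold": 0.95, "batch_size": 100}, {"batch_size": 10})
# batch_size is overridden, threshold keeps its construction-time value
```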
This issue is to track discussions around what validation should happen at `__init__` time, specifically related to validating inputs/outputs of user models and other callables. Initial discussion can be found...
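One form such `__init__`-time validation could take is calling the user's predictor on a small dummy batch and checking that the output is well-shaped. This is a hedged sketch with hypothetical names; alibi's real checks differ per explainer.

```python
# Hypothetical sketch: probe a user-supplied predictor with a dummy batch
# at __init__ time and verify it returns one output per input instance.

def validate_predictor(predictor, n_features: int) -> None:
    dummy = [[0.0] * n_features]  # a single dummy instance
    try:
        out = predictor(dummy)
    except Exception as e:
        raise ValueError(f"Predictor call failed on dummy input: {e}") from e
    if not hasattr(out, "__len__") or len(out) != 1:
        raise ValueError("Predictor must return one output per input instance.")

validate_predictor(lambda X: [0 for _ in X], n_features=4)  # passes silently
```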
As part of ongoing work to integrate the `alibi` runtime in `mlserver`, we realised that for `IntegratedGradients`, if the inference model outputs a class score (e.g. an MNIST handwritten digit recognition model),...
@sakoush found that calling `reset_predictor` right after `__init__` on `AnchorTabular` results in the following error:

```python
Traceback (most recent call last):
  File "make_test_models.py", line 261, in <module>
    _main()
  File "make_test_models.py", line...
```
As noticed by @sakoush, in the case of `TreeShap` the explainer is serialized together with the underlying `shap.TreeExplainer` object, which in turn holds a reference to the white-box model, which...
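A common workaround for this kind of problem is to exclude the heavyweight model reference from the pickled state via `__getstate__`, re-attaching the model after load. The sketch below is illustrative only, not alibi's implementation, and the class name is hypothetical.

```python
import pickle

# Illustrative workaround: drop the white-box model reference before
# pickling; the caller must re-attach the model after deserialization.

class ExplainerState:
    def __init__(self, model, params):
        self.model = model    # white-box model; excluded from the pickle
        self.params = params

    def __getstate__(self):
        state = self.__dict__.copy()
        state["model"] = None  # do not serialize the model itself
        return state

exp = ExplainerState(model=object(), params={"a": 1})
restored = pickle.loads(pickle.dumps(exp))
# restored.params survives; restored.model must be re-attached by the caller
```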
As @sakoush pointed out, for some upstream use cases (specifically `seldon-core` and `mlserver`) it is not desirable to call the passed `predictor` with random data at `__init__` time like we...
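One way to avoid probing the predictor with random data in `__init__` is to defer validation until the first `explain` call, when real user data is available. A minimal sketch with hypothetical names, not alibi's API:

```python
# Sketch of lazy validation: the predictor is first called with real data
# on the initial explain() call rather than with random data in __init__.

class LazyExplainer:
    def __init__(self, predictor):
        self.predictor = predictor
        self._validated = False

    def _validate(self, X):
        out = self.predictor(X[:1])  # probe with one real instance
        if len(out) != 1:
            raise ValueError("Predictor must return one output per instance.")
        self._validated = True

    def explain(self, X):
        if not self._validated:
            self._validate(X)
        return self.predictor(X)

exp = LazyExplainer(lambda X: [sum(x) for x in X])
result = exp.explain([[1, 2], [3, 4]])  # validation happens here
```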
We might also validate the explain parameters that are being passed from config. Currently, I expect the explainer will throw an error, but perhaps we could have...
Currently, all parameters, correct or incorrect (misspelled), are included in the metadata. https://github.com/SeldonIO/alibi/blob/390a255403d61e8d7f87123f745b678b0a5e6753/alibi/explainers/anchor_text.py#L1229 The valid parameters are stored in `self.perturb_opts`, which is set along with `all_opts` in: https://github.com/SeldonIO/alibi/blob/390a255403d61e8d7f87123f745b678b0a5e6753/alibi/explainers/anchor_text.py#L1220-L1222 This should...
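A possible fix, sketched below with hypothetical option names, is to filter recorded parameters against the known set (analogous to `self.perturb_opts`) and warn about anything unrecognised, so misspellings never reach the metadata:

```python
import warnings

# Illustrative sketch: only record parameters found in the known option set
# in the metadata; warn about unknown (e.g. misspelled) names.

KNOWN_OPTS = {"sample_proba", "top_n", "temperature"}  # hypothetical set

def record_params(meta: dict, **kwargs) -> dict:
    unknown = set(kwargs) - KNOWN_OPTS
    if unknown:
        warnings.warn(f"Ignoring unknown parameters: {sorted(unknown)}")
    meta["params"] = {k: v for k, v in kwargs.items() if k in KNOWN_OPTS}
    return meta

meta = record_params({}, sample_proba=0.5, sample_probb=0.9)  # typo filtered
```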
I am trying to explain objects detected by an object detection model in the same way as an image classification model, using Seldon alibi's AnchorImage algorithm. I modified my prediction function such...