alibi
Algorithms for explaining machine learning models
Hello, I am trying to generate counterfactuals for a dataset which contains both categorical and continuous variables. The categorical variables are mostly binary (along with some non-ordinal...
A warning may be issued in certain scenarios even when everything actually works as expected; for more details, see: https://github.com/SeldonIO/alibi/issues/384#issuecomment-819642570
Hello, I have a model with categorical and numerical features. I would like to know whether it is possible to use ordinally encoded categorical variables as inputs, but without...
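For context on the encoding mentioned above, a minimal sketch of ordinal encoding is shown below: each category is mapped to an integer index. Whether an explainer then treats those integers as ordered or as unordered category codes depends on its configuration; the helper name here is illustrative, not part of alibi's API.

```python
def ordinal_encode(column):
    """Map each distinct category in `column` to an integer code."""
    categories = sorted(set(column))
    mapping = {c: i for i, c in enumerate(categories)}
    return [mapping[v] for v in column], mapping

codes, mapping = ordinal_encode(["red", "blue", "red", "green"])
# mapping -> {"blue": 0, "green": 1, "red": 2}; codes -> [2, 0, 2, 1]
```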
Because of https://github.com/SeldonIO/alibi/pull/398 it is not clear how to handle serialization of IG explainers specifying custom layers not accessible via `model.layers`.
Hi, I'm working on a project in which we use different deep learning models and I want to use alibi to have a better insight into the way the models...
Some language models support only a limited number of tokens to be processed at once. Thus, the language model extension of AnchorText splits the text into two parts, `text = head +...
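The head/tail split described above can be sketched as follows. This is an illustrative version, not alibi's actual implementation: the function name, the whitespace tokenizer, and the token budget are all assumptions.

```python
def split_head_tail(text: str, max_tokens: int, tokenizer=str.split):
    """Split `text` so that `head` holds at most `max_tokens` tokens
    and `tail` holds the remainder."""
    tokens = tokenizer(text)
    head = " ".join(tokens[:max_tokens])
    tail = " ".join(tokens[max_tokens:])
    return head, tail

head, tail = split_head_tail("the quick brown fox jumps over the lazy dog",
                             max_tokens=5)
# head -> "the quick brown fox jumps"; tail -> "over the lazy dog"
```

In practice a model-specific tokenizer would replace `str.split`, since token budgets are counted in subword tokens rather than whitespace-separated words.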
Hi, I'm finding that my ONNX image classification model (loaded with the ONNX package and converted to TensorFlow) works with AnchorImage but not with Counterfactuals or CEM. I've tried providing...
Not high priority, as this is technically an undocumented/unsupported component. Need to also check whether `KernelShap` with distributed options has similar behaviour, which would be high priority to fix.
- Consider changing the visibility of `predictor` from public to private, and update the documentation examples: instead of using `explainer.predictor`, use the original `predictor` directly. - The predictor is...
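One way to realize the visibility change suggested above without immediately breaking existing callers is sketched below: store the attribute as `_predictor` and keep a deprecated public `predictor` property. The class name and warning message are illustrative, not alibi's code.

```python
import warnings

class Explainer:
    def __init__(self, predictor):
        # private attribute: for internal use by the explainer only
        self._predictor = predictor

    @property
    def predictor(self):
        # deprecated public accessor kept for backwards compatibility
        warnings.warn(
            "`explainer.predictor` is deprecated; "
            "use the original predictor directly.",
            DeprecationWarning,
        )
        return self._predictor

model = lambda x: x * 2
exp = Explainer(model)
```

Documentation examples would then call `model` directly rather than going through `exp.predictor`.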
Currently, the `unknown` and `similarity` perturbation strategies implement a column-wise sampling procedure for the words to be replaced by the `UNK` token and by similar words, respectively. For a column `i`...
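The column-wise `unknown` strategy can be sketched as follows: for each word position (column) `i`, each sampled perturbation independently decides whether that word is replaced by the `UNK` token. The helper name and the flat per-column replacement probability are assumptions for illustration.

```python
import numpy as np

def perturb_unknown(words, n_samples, p_replace, unk="UNK", seed=0):
    """Sample `n_samples` perturbations of `words`, replacing each word
    (column) independently by `unk` with probability `p_replace`."""
    rng = np.random.default_rng(seed)
    # mask[s, i] is True when word i is replaced by UNK in sample s
    mask = rng.random((n_samples, len(words))) < p_replace
    data = np.where(mask, unk, np.array(words, dtype=object))
    return data, mask

samples, mask = perturb_unknown(["good", "movie", "overall"],
                                n_samples=4, p_replace=0.5)
```

Each row of `samples` is one perturbed sentence; positions where `mask` is `False` keep the original word.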