Ola Zytek
In some cases, transformers modify the original input instead of operating on a copy. See `test_explainer.test_transform_x_with_produce` for an example of where this happens. Transformers should never mutate their input.
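A minimal sketch of the expected contract, assuming a pandas-based transformer (the transformer class and column name here are hypothetical, not Pyreal's API): the transform should work on a copy so the caller's data is untouched.

```python
import pandas as pd

class DoublingTransformer:
    """Hypothetical transformer illustrating the desired behavior:
    transform the data without mutating the caller's DataFrame."""

    def data_transform(self, x):
        x = x.copy()  # defensive copy so the original input is untouched
        x["a"] = x["a"] * 2
        return x

original = pd.DataFrame({"a": [1, 2, 3]})
transformed = DoublingTransformer().data_transform(original)
assert original["a"].tolist() == [1, 2, 3]   # input unchanged
assert transformed["a"].tolist() == [2, 4, 6]
```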
A common need in decision-making is to understand not just what a model says the right decision is, but to understand the expected cost and benefits of a decision. Given...
In some cases, the model will output a different value than the one stored as target values; for example, the model may output a probability, or a thresholded bin...
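One possible shape of the fix, sketched below with hypothetical names (the threshold, labels, and function are illustrative assumptions, not Pyreal's API): map raw model outputs back into the space of the stored target values before displaying them.

```python
import numpy as np

def align_predictions(raw_outputs, threshold=0.5, labels=("reject", "approve")):
    """Hypothetical helper: threshold raw probabilities into the label
    space used by the stored target values, so predictions and targets
    are comparable."""
    raw_outputs = np.asarray(raw_outputs)
    return np.where(raw_outputs >= threshold, labels[1], labels[0])

aligned = align_predictions([0.2, 0.9])  # e.g. maps to ['reject', 'approve']
```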
Permutation feature importance, currently implemented by `PermutationFeatureImportance` explainers, assumes fully independent features, which is often not the case. Its accuracy can be improved by permuting correlated features together (see `sklearn`'s...
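A sketch of the grouped-permutation idea under stated assumptions: `groups`, the scoring signature, and the toy model below are all hypothetical illustrations, not Pyreal's actual interface. Columns believed to be correlated are shuffled jointly with a single row permutation, so their joint contribution is measured instead of each column's marginal one.

```python
import numpy as np

def grouped_permutation_importance(model, X, y, groups, score, n_repeats=5, seed=0):
    """Permutation importance where correlated columns are permuted together.

    groups: dict mapping a group name to a list of column indices that
    should be shuffled jointly (one shared row permutation per group).
    """
    rng = np.random.default_rng(seed)
    base = score(y, model.predict(X))
    importances = {}
    for name, cols in groups.items():
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            perm = rng.permutation(len(X))
            X_perm[:, cols] = X_perm[perm][:, cols]  # shuffle the group jointly
            drops.append(base - score(y, model.predict(X_perm)))
        importances[name] = float(np.mean(drops))
    return importances


class ColumnModel:
    """Toy model that predicts the first feature verbatim."""
    def predict(self, X):
        return X[:, 0]


X = np.arange(10.0).reshape(5, 2)
y = X[:, 0].copy()
accuracy = lambda yt, yp: float(np.mean(yt == yp))
importances = grouped_permutation_importance(
    ColumnModel(), X, y, {"used": [0], "unused": [1]}, accuracy
)
# The group containing the predictive column matters; the other does not.
```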
The full DevOps workflow of using Poetry to lock dependencies and updating libraries manually or with Dependabot can be confusing for developers unfamiliar with these processes. We should add...
Currently, Pyreal accepts only tabular data. For future use-cases, we need to support time series data as well, which is an interesting and complicated problem in explainable ML. This epic...
The time series explainers should all be updated to support Pyreal's full explanation transform workflow, with at least one example each for data transforms and explanation transforms (for example, `pad`)....
This issue extends #204, with additional support for time series where each row in the dataset can have variable lengths. This will likely require a change in how we...
Currently, `shap` does not work on models that aren't callable (i.e., models that make predictions when called directly). Pyreal, however, only requires models to have a `.predict()` function. We can fix this...
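One straightforward fix is an adapter, sketched below (the class names are hypothetical): wrap the `.predict()`-only model in an object that is itself callable, which can then be handed to `shap`.

```python
class CallableModel:
    """Hypothetical adapter: wraps a model exposing only `.predict()` so
    that calling the wrapper directly makes predictions, as `shap` expects."""

    def __init__(self, model):
        self.model = model

    def __call__(self, X):
        return self.model.predict(X)


class PredictOnlyModel:
    """Stand-in for a Pyreal-style model with only a .predict() method."""

    def predict(self, X):
        return [x * 2 for x in X]


wrapped = CallableModel(PredictOnlyModel())
assert wrapped([1, 2, 3]) == [2, 4, 6]  # the wrapper is now directly callable
```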
Explainers should take in an optional list of model parameters that will be passed to the `model.predict()` function, allowing for functionality like suppressing model debugging calls. As part of this,...
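A minimal sketch of the pass-through mechanism, with hypothetical names (`ExplainerSketch`, `predict_kwargs`, and the `verbose` flag are illustrative assumptions, not Pyreal's API): parameters supplied at construction are forwarded to every `model.predict()` call.

```python
class ExplainerSketch:
    """Illustrative explainer: extra keyword arguments supplied at
    construction are forwarded to every model.predict() call."""

    def __init__(self, model, predict_kwargs=None):
        self.model = model
        self.predict_kwargs = predict_kwargs or {}

    def model_predict(self, X):
        # e.g. predict_kwargs={"verbose": False} could suppress debug output
        return self.model.predict(X, **self.predict_kwargs)


class ChattyModel:
    """Toy model that prints debug output unless told not to."""

    def predict(self, X, verbose=True):
        if verbose:
            print("predicting...")
        return [x + 1 for x in X]


explainer = ExplainerSketch(ChattyModel(), predict_kwargs={"verbose": False})
result = explainer.model_predict([1, 2])  # runs without the debug print
```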