interpret
Fit interpretable models. Explain blackbox machine learning.
Hi devs, I'm having some confusion reading the graphs, so I'll give some context and describe the confusion I'm facing. ### Context I'm working on an NLP multiclass problem and...
Hi folks! I am exploring the world of XAI in order to adapt it to Motion Prediction for Autonomous Driving. My inputs are map features (such as lanes, nodes, etc.)...
Hi, I would like to use LIME interpretation for a dataset where I have done preprocessing such as one-hot encoding, imputation, and calculating new features using a ColumnTransformer. I found the below sample code...
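A minimal scikit-learn sketch of the setup this question describes (the data and column names here are made up, not from the thread): when preprocessing is done with a `ColumnTransformer`, wrapping the preprocessing and the model in a single `Pipeline` lets an explainer such as LIME call `predict_proba` on raw, un-encoded rows, so the encoding and imputation happen inside the black box.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy data: one categorical column, one numeric column with missing values.
X = pd.DataFrame({
    "color": ["red", "blue", "red", "green", "blue", "red"],
    "size": [1.0, np.nan, 3.0, 2.0, 5.0, np.nan],
})
y = np.array([0, 1, 0, 1, 1, 0])

# One-hot encode the categorical column, impute the numeric one.
preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["color"]),
    ("num", SimpleImputer(strategy="mean"), ["size"]),
])

# The full pipeline is what an explainer should be given, not the bare model.
pipe = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
pipe.fit(X, y)

# Hand pipe.predict_proba to LIME so preprocessing runs inside the black box.
proba = pipe.predict_proba(X)
print(proba.shape)  # (6, 2)
```

The key design choice is that the explainer perturbs raw input rows, so any transform that must be applied before prediction has to live inside the function the explainer calls.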
I'm testing out running a few models and creating an interpret dashboard. Currently, my code runs with no error messages, and if I run it to show a single model, e.g....
Hi, this is more of a question than an issue, but I was wondering if it is possible to see the FAST results for all the interactions that are...
I have a model (fit during an Azure automated ML run) that predicts on a dataframe just fine, but fails when the model and dataframe are passed to interpret functions...
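One common cause of this pattern (predicts fine on a DataFrame, fails inside an explainer) is that explainers often pass plain numpy arrays to the model, while the model was fit on a DataFrame with named columns. A hedged sketch of a workaround, using a plain scikit-learn model as a stand-in for the AutoML model and made-up column names:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Stand-in for the AutoML-fitted model: fit on a DataFrame with named columns.
train_df = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0], "b": [0.0, 1.0, 0.0, 1.0]})
target = np.array([1.0, 3.0, 5.0, 7.0])
model = LinearRegression().fit(train_df, target)

feature_names = list(train_df.columns)

def predict_fn(x):
    """Rebuild a DataFrame with the original column names before predicting,
    since many explainers hand the model a bare numpy array."""
    return model.predict(pd.DataFrame(x, columns=feature_names))

# Give the explainer predict_fn instead of model.predict.
out = predict_fn(np.array([[5.0, 1.0]]))
print(out.shape)  # (1,)
```

Whether this applies depends on the actual traceback, which is truncated in the preview; if the failure is elsewhere (e.g. dtype handling), the wrapper would need to reproduce whatever the model saw at fit time.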
Trying to install this in Google Colab with pip, I get some issues. After restarting the runtime, it cannot find the library. ``` ERROR: pip's dependency resolver does not currently...
Hi, I tried training an ExplainableBoostingRegressor using Dask arrays, but I keep running into the following issue: ```python ERROR:interpret.utils.all:Could not unify data of type: --------------------------------------------------------------------------- ValueError Traceback (most recent call...
Hi, with the black-box explanation techniques listed below, we have different visualizations available. I would like to clarify what kinds of visualization are available for each. LIME -> We can use standard...
Hi, 1. How do I get LIME results for a multiclass problem? When I run the code from the examples (with y_test as one-hot encoded vectors), the results I...
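One likely source of the confusing multiclass results: LIME-style tabular explainers generally expect integer class labels, not one-hot vectors. A minimal numpy sketch (the label values here are hypothetical) showing the usual conversion:

```python
import numpy as np

# One-hot encoded labels, as described in the question (made-up values).
y_test_onehot = np.array([
    [1, 0, 0],
    [0, 0, 1],
    [0, 1, 0],
])

# argmax along axis 1 recovers the integer class index for each row,
# which is the label format most explainers expect.
y_test_labels = y_test_onehot.argmax(axis=1)
print(y_test_labels)  # [0 2 1]
```

With integer labels, the per-class explanations can then be requested for the class index of interest rather than a one-hot column.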