Results: 75 comments of InterpretML

Hi @andro536 , The 0.2.0 release of interpret had a few breaking changes which included some attribute renames. The new name of the property is `additive_terms_` -- just substituting that...
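
A minimal sketch of accessing the renamed attribute, assuming interpret >= 0.2.0 and a fitted EBM (the toy data below is only for illustration):

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

# Toy data so the snippet is self-contained
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)

ebm = ExplainableBoostingClassifier().fit(X, y)

# As of 0.2.0 the learned shape functions live under the new name:
scores = ebm.additive_terms_  # one array of scores per term
print(len(scores))
```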

Hi @candalfigomoro, The overall feature importances are simply calculated as the mean absolute contribution per feature on the training dataset. Our code for calculating them is just a few lines...
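
The referenced code is not reproduced here, but a minimal sketch of the described computation, assuming a hypothetical `(n_samples, n_features)` array of per-sample additive contributions:

```python
import numpy as np

def mean_abs_importance(contributions):
    """Overall importance per feature: the mean absolute contribution
    across the training set, as described above."""
    return np.mean(np.abs(contributions), axis=0)

# Hypothetical contributions: 3 samples, 2 features
contribs = np.array([[ 0.5, -0.1],
                     [-0.3,  0.2],
                     [ 0.4, -0.3]])
print(mean_abs_importance(contribs))  # -> [0.4  0.2]
```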

Hi @Dola47, Good question! Each of the graphs from the show function are Plotly figures, so you can save them using the Kaleido library that Plotly publishes (assuming your environment...
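
A minimal sketch, assuming a fitted EBM and `kaleido` installed alongside Plotly (the training data is a stand-in):

```python
# pip install kaleido
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

ebm = ExplainableBoostingClassifier().fit(X, y)
explanation = ebm.explain_global()

# Each graph behind show() is a Plotly figure; grab one and save it
fig = explanation.visualize(0)   # figure for the first term
fig.write_image("term_0.png")    # rendered to disk via Kaleido
```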

Hi @MeredithHilgeman - thanks for the suggestion. We'll need some more information about your notebook environment. Is this a cloud-based Jupyter Notebook or JupyterLab?

Hi @kspieks, Good questions! Here are some answers: 1) Yes, in practice you should drop one version of this feature. We just leave it in as a demonstrable example of...
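
On point 1, a minimal sketch of dropping the redundant encoding before training; the column names here are hypothetical:

```python
import pandas as pd

# Hypothetical frame carrying the same information twice
df = pd.DataFrame({
    "age": [25, 40, 31],
    "age_binned": ["20-30", "40-50", "30-40"],  # redundant encoding
    "income": [40_000, 85_000, 52_000],
})

# Keep only one representation of the feature
X = df.drop(columns=["age_binned"])
```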

Hi @ShrutiVekariya, Thanks for bringing this up. We intended for the right side to show the true class and its predicted probability from the model, but this has become too...

Hi @Dola47, Thanks for the questions! Let me answer them one at a time: > I see that you are not doing that, and you just call the predict_prob of...
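
For context on the quoted question, a minimal sketch of calling `predict_proba` directly on a fitted EBM, which follows the scikit-learn estimator API (the data is a stand-in):

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(int)

ebm = ExplainableBoostingClassifier().fit(X, y)

# Class probabilities, shape (n_samples, n_classes)
print(ebm.predict_proba(X[:5]))
```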

Hi @JoshuaC3, Thanks for bringing up this detailed and interesting discussion! EBMs actually already do a stagewise training procedure of fitting main effects (or 1st order terms) first, and then...
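
A minimal sketch of the two-stage behavior, assuming interpret's `interactions` parameter, which caps how many pairwise terms are added after the main effects are fit (the data is synthetic):

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = ((X[:, 0] * X[:, 1]) > 0).astype(int)  # signal lives in a pair

# Stage 1 fits main effects; stage 2 then fits up to 10 pairwise
# interaction terms on what the mains could not explain
ebm = ExplainableBoostingClassifier(interactions=10).fit(X, y)
print(ebm.predict(X[:5]))
```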

Hi @p9anand - thanks for bringing this up! We're working on missing values at the moment. The approach we're following is to treat missing as its own special value, and...
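
A conceptual sketch of "missing as its own special value", illustrated with pandas rather than interpret's internals (which are not shown here):

```python
import numpy as np
import pandas as pd

x = pd.Series([1.0, 2.5, np.nan, 4.0, np.nan])

bins = pd.cut(x, bins=3)                     # ordinary value bins
bins = bins.cat.add_categories(["Missing"])  # one dedicated extra bin
bins = bins.fillna("Missing")                # NaN maps to its own bin
print(bins.value_counts())
```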