Results: 75 comments from InterpretML

Hi @Nima-pw, Glad to hear about your interest! Could you elaborate a bit on your question for us? You can find our source code for all explainers here: https://github.com/interpretml/interpret/tree/develop/python/interpret-core/interpret in...

Hi Chad, Thanks for bringing this up! If you visit the URL posted in the printed output ("http://127.0.0.1:7001/..."), you should see the visualization output from `show()`, which is hosted on...

Hi @aman63 -- We're happy to hear that you're finding the package useful. It's possible to combine these models by simply averaging the `additive_terms_` fields between them. This won't generate...
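The averaging idea above can be sketched without touching the library itself: each model's per-term scores are just arrays of shape values, so merging two models amounts to an element-wise mean of the corresponding arrays. The nested lists below stand in for the `additive_terms_` contents of two fitted EBMs (hypothetical values, assuming both models were trained on the same features with the same binning):

```python
# Stand-ins for the additive_terms_ of two EBMs trained on the same
# features/bins: one list of per-bin scores per term.
terms_a = [[0.1, -0.2, 0.3], [1.0, -1.0]]
terms_b = [[0.3, 0.0, 0.1], [0.5, -0.5]]

# Merge by averaging each term's scores element-wise.
merged = [
    [(x + y) / 2 for x, y in zip(ta, tb)]
    for ta, tb in zip(terms_a, terms_b)
]
```

Note this only makes sense when the feature preprocessing (bin edges) matches across the two models; otherwise the arrays are not aligned and averaging them is meaningless.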

Hi @malakar-soham, Good question! The overall summary is the _mean absolute_ value of the scores across the training set. Another way to think of it is "on average across the...
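The "mean absolute value" summary described above is a one-liner once you have a feature's per-sample scores. A minimal sketch, assuming `scores` holds one feature's contribution score for each training example (the values here are made up):

```python
# Per-sample contribution scores for one feature across the training set
# (hypothetical values for illustration).
scores = [0.5, -1.0, 0.25, -0.75]

# Overall summary importance: mean of the absolute scores, i.e.
# "on average, how much does this feature move the prediction?"
overall_importance = sum(abs(s) for s in scores) / len(scores)
```

Taking the absolute value first matters: positive and negative contributions would otherwise cancel and understate how much the feature actually moves predictions.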

Hi @jayswartz - just got to this, sounds like a mess! We haven't tried it on DigitalOcean Droplet before so not sure about anything environmentally specific to it. In terms...

Hi @jayswartz, Haven't seen that error before on the client, but your intuition sounds right; I will have to check this. If it's available, do you see this error on...

Hi @minor6th -- The R package is still very new and there's a lot more work we need to do in order to give it the same level of features...

Hi @shneezers, Thanks for bringing this up! With the way the code is written now, it's not easy to extract the computed FAST scores. FAST currently calculates scores for each...

Hi @OGK0, Thanks for raising this issue. Unfortunately the sensitivity analysis code currently has no support for categorical features, so we have to provide outputs at the component level instead...

Hi @onacrame, Great point -- our default validation sampling does stratify across the label, but unfortunately does not customize beyond that. Adding support for custom validation sets (which are only...
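To make the default behavior concrete, here is a minimal stdlib-only sketch of label-stratified validation sampling: group example indices by label, then carve off the same fraction of each group for validation so the label distribution is preserved. This is an illustrative stand-in, not the library's internal implementation:

```python
import random
from collections import defaultdict


def stratified_split(labels, valid_frac, seed=0):
    """Split indices into (train, valid), preserving the label distribution.

    labels: per-example labels; valid_frac: fraction held out per label group.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, y in enumerate(labels):
        by_label[y].append(i)

    train, valid = [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)
        # Take the same fraction from every label group (at least one example).
        n_valid = max(1, round(len(idxs) * valid_frac))
        valid.extend(idxs[:n_valid])
        train.extend(idxs[n_valid:])
    return sorted(train), sorted(valid)
```

For example, with 80 negatives and 20 positives and `valid_frac=0.25`, the validation set gets 20 negatives and 5 positives, matching the 4:1 ratio of the full data. A custom validation set, by contrast, would let the user hand in their own index lists instead of relying on this sampling.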