Add model understanding integration tests
In #2815, we added integration tests for DataCheckActions + DataChecks.
I think another potential use case is running AutoMLSearch + model_understanding together. We do something similar with LG, so I'm open to hearing thoughts on whether this is useful.
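For concreteness, here's a rough sketch of what such a test could look like. This is only illustrative, not a proposed final design: the dataset loader, the iteration budget, and the specific model_understanding calls are assumptions.

```python
# Minimal sketch of an AutoMLSearch + model_understanding integration test:
# run a short search, then verify the model understanding functions accept
# the resulting pipeline without raising.
from evalml.automl import AutoMLSearch
from evalml.demos import load_breast_cancer
from evalml.model_understanding import (
    calculate_permutation_importance,
    partial_dependence,
)


def test_model_understanding_on_automl_pipeline():
    X, y = load_breast_cancer()
    automl = AutoMLSearch(
        X_train=X,
        y_train=y,
        problem_type="binary",
        max_iterations=3,  # keep the search short; we only need a fitted pipeline
    )
    automl.search()
    pipeline = automl.best_pipeline  # fitted on the training data by default

    # The assertion is simply "does not crash", which is the point of the test.
    calculate_permutation_importance(pipeline, X, y, objective="Log Loss Binary")
    partial_dependence(pipeline, X, features=X.columns[0])
```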
If we decide to stick with LG for this kind of test, there may be ways to improve the coverage for model understanding:
- Test more logical types. Right now I think we just use the first column in the holdout set for LG. This could replace some of our existing unit tests: run model understanding on the kind of pipeline we'd get out of AutoMLSearch and verify that it runs without crashing (see the sketch after this list)!
- Test more parameter combos
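To illustrate both bullets, here's a hedged sketch of how the coverage could be parametrized over logical types and parameter combinations. The fixture names (`fitted_automl_pipeline`, `X_holdout`) and the column/parameter grid are hypothetical placeholders, not part of any existing suite.

```python
# Sketch: parametrize over columns of different logical types and over a
# small parameter grid, instead of always using the first holdout column.
import pytest

from evalml.model_understanding import partial_dependence


@pytest.mark.parametrize("column", ["numeric_col", "categorical_col", "boolean_col"])
@pytest.mark.parametrize("grid_resolution", [20, 100])
def test_partial_dependence_logical_types(
    fitted_automl_pipeline, X_holdout, column, grid_resolution
):
    # `fitted_automl_pipeline` and `X_holdout` are hypothetical fixtures that
    # would come from a shared AutoMLSearch run (see the sketch above).
    partial_dependence(
        fitted_automl_pipeline,
        X_holdout,
        features=column,
        grid_resolution=grid_resolution,
    )
```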
Just wanted to start the conversation!
IMO, adding evalml-side integration tests would be better for developer iteration (until we add an LG check in tests): results come back faster, the tests can be run locally, and they're easy to improve or alter. So +1 from me on adding integration tests so that we can catch API inconsistencies and bugs on the front line, and then use LG to test final use cases and interactions.
+1 to adding integration tests!
Correct me if I'm wrong, but right now we're not tracking "performance" metrics for our model understanding tools in LG, right? We just check that the tools run and don't error out? That seems much more like the job of an integration test :)
Notes from discussion today:
- Biggest value of having this suite on top of LG suite: immediate feedback to devs
- Let's do it