
New Linter - Data Quality

bvobart opened this issue on Jun 21, 2021

The goal for this issue is to build a linter for the Data Quality category that checks whether and how a project uses tools to assert quality standards on its input data, e.g. GreatExpectations, TFDV (TensorFlow Data Validation) and (maybe) Bulwark, the successor to Engarde.
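To make concrete what "using such a tool" looks like, here is a minimal, hypothetical sketch of GreatExpectations applied to a tabular dataset (using its pandas-dataset API; the file path and column name are placeholders, not taken from the example project):

```python
import great_expectations as ge

# Wrap a CSV file in Great Expectations' pandas dataset so that
# expectation (data quality check) methods become available on it.
df = ge.read_csv("data/train.csv")  # hypothetical path

# Declare a few expectations on the input data (hypothetical column).
df.expect_column_values_to_not_be_null("age")
df.expect_column_values_to_be_between("age", min_value=0, max_value=120)

# Run all declared expectations; the result reports overall and per-check success.
result = df.validate()
print(result.success)
```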

Firstly, we should figure out, primarily for GreatExpectations and TFDV:

  1. How do we apply these tools to an ML project? What generally needs to change about a project in order to implement such a tool? How much effort does this take? The latest branch of the basic project in the mllint-example-projects repo can be used as a base for this.
  2. What constitutes effective use of these tools? What kinds of checks would ML engineers want to implement on their data? Are there default checks that should always be enabled, or should the user define their own set of checks?
  3. How could mllint measure and assess whether a project is making effective use of these tools?
  4. How could mllint measure and assess whether the checks made by these tools passed? This could entail running GreatExpectations or TFDV in a similar way to what we do for Code Quality linters (Pylint, Mypy, etc.) and parsing the output (bonus points if this output can be formatted in a machine-readable way such as JSON or YAML). A rough TFDV sketch follows this list.
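Regarding questions 1 and 4, the following sketch shows one way TFDV could be applied to a dataset and how its output can be obtained in machine-readable form. The data path is a placeholder, and this is not a prescription of how mllint will invoke the tool:

```python
import tensorflow_data_validation as tfdv
from google.protobuf.json_format import MessageToJson

# Compute descriptive statistics over the dataset (hypothetical path).
stats = tfdv.generate_statistics_from_csv(data_location="data/train.csv")

# Infer a schema (expected types, domains, presence) from those statistics.
schema = tfdv.infer_schema(statistics=stats)

# Validate the statistics against the schema; anomalies represent failed checks.
anomalies = tfdv.validate_statistics(statistics=stats, schema=schema)

# The anomalies object is a protobuf message, so it can be serialised to JSON,
# i.e. the kind of machine-readable output mllint could parse.
print(MessageToJson(anomalies))
```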

Then, to implement it:

  • [ ] Figure out the answers to the above questions.
  • [ ] Determine which linting rules mllint will use to check whether a project is using GreatExpectations correctly (a rough detection heuristic is sketched after this checklist)
  • [ ] Determine which linting rules mllint will use to check whether a project is using TFDV correctly
  • [ ] Implement the linter to check these rules (just copy the template linter and start editing that)
  • [ ] Implement tests for the linter
  • [ ] Write the documentation to go with those rules
  • [ ] Write the documentation for the category
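As a starting point for the detection side of these rules, here is an illustrative heuristic for "does this project declare a data quality tool as a dependency?". It is written in Python purely for brevity; the real linter would be implemented in Go by copying mllint's template linter, and the file names and package names below are assumptions, not mllint's actual rule set:

```python
from pathlib import Path

# Packages whose presence suggests the project uses a data quality tool.
DATA_QUALITY_PACKAGES = {"great_expectations", "tensorflow_data_validation", "bulwark"}

# Dependency files to scan; which files mllint actually considers is an assumption here.
DEPENDENCY_FILES = ["requirements.txt", "pyproject.toml", "Pipfile", "setup.py"]

def uses_data_quality_tool(project_dir: str) -> bool:
    """Return True if any dependency file mentions a known data quality package."""
    root = Path(project_dir)
    for name in DEPENDENCY_FILES:
        path = root / name
        if path.is_file():
            # Normalise dashes to underscores so "tensorflow-data-validation" also matches.
            text = path.read_text(encoding="utf-8", errors="ignore").lower().replace("-", "_")
            if any(pkg in text for pkg in DATA_QUALITY_PACKAGES):
                return True
    return False

if __name__ == "__main__":
    print(uses_data_quality_tool("."))
```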
