improve ergonomics of test execution
The test format implementation is difficult to work with for a number of reasons.
Cannot Run Coverage Code Locally
Attempting to run

```
pytest tests/test_extensions.py::test_substrait_extension_coverage
```

locally results in errors like:

```
__________________________________________________ ERROR collecting tests/test_extensions.py __________________________________________________
ImportError while importing test module '/Users/victor.barua/scratch/substrait/tests/test_extensions.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/test_extensions.py:4: in <module>
    from tests.coverage.case_file_parser import load_all_testcases
tests/coverage/case_file_parser.py:4: in <module>
    from antlr4 import CommonTokenStream, FileStream
E   ModuleNotFoundError: No module named 'antlr4'
=========================================================== 1 error in 0.02 seconds ===========================================================
ERROR: not found: /Users/victor.barua/scratch/substrait/tests/test_extensions.py::test_substrait_extension_coverage
(no name '/Users/victor.barua/scratch/substrait/tests/test_extensions.py::test_substrait_extension_coverage' in any of [<Module test_extensions.py>])
```
even with the antlr4 runtime available:

```
➜ pip3 freeze
antlr4-python3-runtime==4.13.2
...
```
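A common cause of this "installed but `ModuleNotFoundError`" symptom is that `pip3` installed the package into a different interpreter than the one pytest runs under. A stdlib-only diagnostic sketch (not part of the repo) to check which interpreter is in use and whether it can see antlr4:

```python
import importlib.util
import sys

# Print the interpreter currently running; compare it against the
# interpreter that `pip3` installs into (`pip3 --version` shows it).
print(sys.executable)

# find_spec returns None when the module is not importable from this
# interpreter, without raising ImportError.
spec = importlib.util.find_spec("antlr4")
print("antlr4 found at:", spec.origin if spec else None)
```

If the two interpreters differ, installing with `python3 -m pip install ...` instead of `pip3 install ...` ties the install to the interpreter you actually run.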
As a result, I cannot test my changes locally, and because this is included as a pre-commit hook, none of my commits are able to use the hooks.
Adding Tests Requires Mucking With Coverage Asserts
When trying to fix the lint failures in https://github.com/substrait-io/substrait/pull/769, caused by my adding coverage to existing tests, I had to adjust some of the numbers in https://github.com/substrait-io/substrait/blob/2d8b1b673718df3deb9deae73de1f53dedba75f1/tests/test_extensions.py#L10-L38. It's not clear from the error what needs to be modified, and it's hard to verify the changes because I can't run the tests locally.
Desired State
In order of increasing opinionatedness:
- If we're going to have Python tooling, we should have a way to configure environments reproducibly (i.e. requirements.txt/Pipfiles/etc.) so that folks can execute the tests and hooks locally.
- We should separate the execution of the tests from coverage.
- CI should block on test failures, but not on coverage changes. Coverage should be a report that we can look at.
@scgkiran, can you or someone on the team pick up the first two items? I think the third is more complicated to get right.
For item 2, let's make it so that if a test fails against the baselines, it reports the new baselines versus the old, along with the file that should be updated if the baselines are wrong.
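As a sketch of what that could look like (a hypothetical helper, not the current implementation; the file path and baseline values below are placeholders):

```python
# Hypothetical sketch of a baseline assertion that tells contributors
# what changed and where to update the expected values when the new
# numbers are intentional.
BASELINE_FILE = "tests/test_extensions.py"  # assumed location of the baselines

def assert_matches_baseline(name, actual, expected):
    if actual != expected:
        raise AssertionError(
            f"{name}: baseline was {expected}, got {actual}. "
            f"If the new value is intentional, update {BASELINE_FILE}."
        )

# Passing case: the observed count matches the recorded baseline.
assert_matches_baseline("total test cases", 120, 120)
```

With a message like this, a contributor who legitimately adds coverage knows exactly which numbers to bump and where, instead of reverse-engineering it from a bare assertion failure.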
For item 3, my thinking has historically been that we update the code to print out some kind of "coverage file" that Codecov understands, so that we can use Codecov's capabilities for baselining, etc., rather than having to build that part in GitHub Actions ourselves. Would love it if someone was excited to pick that up!
For item 2, I raised a PR.
For item 1:
pytest is now part of the PR workflow.
Locally, I installed antlr using `pip install antlr4-python3-runtime==4.13.2`. I don't remember doing anything else.
Based on the PR workflow, requirements.txt will look like:

```
black==22.3.0
flake8==7.0.0
pytest
antlr4-python3-runtime==4.13.2
```

Install with:

```
pip3 install -r requirements.txt
```
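To keep those pinned dependencies from leaking into the system interpreter, the usual flow is to install them inside a virtual environment (`python3 -m venv .venv && .venv/bin/pip install -r requirements.txt`). The same setup via the stdlib, as an illustrative sketch (the temp directory is just for demonstration):

```python
import os
import sys
import tempfile
import venv

# Create an isolated virtual environment (equivalent to `python3 -m venv`)
# so the pinned requirements install per-project rather than into the
# system interpreter. A temp dir is used here purely for illustration.
env_dir = os.path.join(tempfile.mkdtemp(), "venv")
venv.EnvBuilder(with_pip=True).create(env_dir)

# The environment's own pip can then install the pinned dependencies:
#   <env_dir>/bin/pip install -r requirements.txt
bin_dir = "Scripts" if sys.platform == "win32" else "bin"
print(os.path.join(env_dir, bin_dir, "pip"))
```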
@vbarua, can you try this? If it works, try running the tests in the same terminal.
Using that as a base, I was able to get the tests running locally. I've added some additional Python configuration to the build to make this more repeatable for future users in https://github.com/substrait-io/substrait/pull/781.