
tests supposed to run with pytest shouldn't fall back to unittest in case of problems

Open • rotu opened this issue Mar 12 '20 • 9 comments

http://build.ros2.org/view/Fci/job/Fci__nightly-cyclonedds_ubuntu_focal_amd64/40/consoleText

Although the underlying issue was fixed in #317, colcon test incorrectly reported that testing completed successfully, so the failure was not flagged on CI.

Starting >>> rqt
[967.160s] ERROR:colcon.colcon_core.task.python.test:Exception in Python testing step extension 'pytest': 'NoneType' object is not iterable
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/colcon_core/task/python/test/__init__.py", line 54, in test
    matched = extension.match(self.context, env, setup_py_data)
  File "/usr/lib/python3/dist-packages/colcon_core/task/python/test/pytest.py", line 49, in match
    return has_test_dependency(setup_py_data, 'pytest')
  File "/usr/lib/python3/dist-packages/colcon_core/task/python/test/__init__.py", line 214, in has_test_dependency
    for d in tests_require:
TypeError: 'NoneType' object is not iterable


----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK
--- stderr: rqt
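For context, the crash happens because has_test_dependency iterates over a tests_require value that is None. A minimal sketch of the kind of guard that avoids the TypeError (illustrative only; the actual fix landed in #317, and the name-matching detail here is an assumption):

import re

def has_test_dependency(setup_py_data, name):
    # setup.py data without a tests_require entry yields None, so
    # default to an empty sequence before iterating (sketch only)
    tests_require = setup_py_data.get('tests_require') or ()
    for d in tests_require:
        # strip any version specifier, e.g. 'pytest>=3.0' -> 'pytest'
        if re.split(r'[\s<>=!~]', d, maxsplit=1)[0] == name:
            return True
    return False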

rotu commented Mar 12 '20 23:03

There are multiple ways you can run tests for a Python package. With the first extension failing, colcon attempts to use the next extension, which in this case passes:

Ran 0 tests in 0.000s

OK

I don't think colcon has a way to distinguish this failure from an intentional skip. It already prints a big error message. The only option I see is to not fall through to the next extension, and imo that would be a step backwards in terms of robustness.
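To make the fall-through concrete, here is a rough sketch of the behavior being described, assuming extensions are tried in priority order (the names and structure are illustrative, not colcon's actual API):

import logging

logger = logging.getLogger('colcon.sketch')

def run_first_matching(extensions, context, env, setup_py_data):
    # try extensions from highest to lowest priority; if one raises,
    # log the error and fall through to the next one for robustness
    for extension in sorted(extensions, key=lambda e: e.PRIORITY, reverse=True):
        try:
            if extension.match(context, env, setup_py_data):
                return extension.step(context, env, setup_py_data)
        except Exception as e:
            logger.error(f'Exception in Python testing step extension: {e}')
    return None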

dirk-thomas commented Mar 12 '20 23:03

I think colcon should continue to run further tests (unless --abort-on-error is true) and return a non-zero error code when it finishes. Possibly it should add 1 to the error count in the test suite, and possibly it should add a test case to the XML output to represent the error encountered.
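A hypothetical sketch of that last suggestion, emitting a synthetic JUnit-style error entry so CI dashboards would flag the crash (the element names follow the common JUnit XML schema; nothing here is colcon's actual implementation):

from xml.etree import ElementTree

def write_extension_error(path, extension_name, traceback_text):
    # a synthetic suite holding a single errored test case
    suite = ElementTree.Element(
        'testsuite', name='colcon.extension_errors', tests='1', errors='1')
    case = ElementTree.SubElement(
        suite, 'testcase', classname='colcon', name=extension_name)
    error = ElementTree.SubElement(
        case, 'error', message='testing step extension raised an exception')
    error.text = traceback_text
    ElementTree.ElementTree(suite).write(path, xml_declaration=True)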

rotu commented Mar 12 '20 23:03

I think colcon should continue to run further tests (unless --abort-on-error is true)

I am not referring to tests from other packages. colcon did run the tests it found (zero of them, using python setup.py test) for that specific package. The first extension which can run Python tests had an error, but a second extension ran the tests it found just fine.

So my question again: do you suggest getting rid of the fallback behavior? As I mentioned above, that fallback behavior is very much desired in a lot of other cases because it increases robustness. How else do you suggest colcon should distinguish a genuine failure from this intentional fall-through to another extension (which does the requested job just fine)?

dirk-thomas commented Mar 12 '20 23:03

  1. I think any exception other than a SkipExtensionException should cause an error message and a nonzero return code (see the sketch after this list). Maybe every extension that returns True from match() should run. (btw, setup.py test is now deprecated, so using it to kick off pytest isn't something to worry about for long)
  2. Unfortunately the fallthrough isn't really appropriate. If a test suite is supposed to run with pytest and instead runs with unittest, it's very unlikely the results are at all sensible.
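A sketch of what point 1 could look like, assuming a SkipExtensionException similar to the one in colcon_core.plugin_system (hypothetical structure, not a patch):

from colcon_core.plugin_system import SkipExtensionException

def run_with_error_propagation(extensions, context, env, setup_py_data):
    # SkipExtensionException means 'try the next extension'; any other
    # exception is reported and surfaces as a non-zero return code
    for extension in extensions:
        try:
            if not extension.match(context, env, setup_py_data):
                continue
            return extension.step(context, env, setup_py_data)
        except SkipExtensionException:
            continue  # intentional: defer to the next extension
        except Exception as e:
            print(f'Exception in extension: {e}')
            return 1
    return 0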

rotu commented Mar 13 '20 04:03

I think any exception other than a SkipExtensionException should cause an error message and a nonzero return code.

See https://github.com/colcon/colcon-core/pull/322#issuecomment-604103290 why I think this goes against the philosophy to be as robust against failures as possible.

Maybe every extension that returns True from match() should run.

No, that is not a viable option. All extensions are considered in priority order and the highest-priority one which matches is responsible for performing the task. Multiple extensions performing the same task doesn't make any sense.

Unfortunately the fallthrough isn't really appropriate. If a test suite is supposed to run with pytest and instead runs with unittest, it's very unlikely the results are at all sensible.

This is the essential point. Therefore I think a solution should address this exact case: the two extensions in question shouldn't both be considered in this situation.
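One hypothetical shape of such a solution: the unittest-based extension's match() could decline packages that declare a pytest test dependency, so a crash in the pytest extension can no longer silently fall through to unittest (illustrative only, reusing has_test_dependency from the traceback above):

def match(self, context, env, setup_py_data):
    # if the package declares pytest, the pytest extension alone is
    # responsible; declining here removes the undesired fallback
    if has_test_dependency(setup_py_data, 'pytest'):
        return False
    return True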

dirk-thomas commented Mar 25 '20 21:03

I don't know that I agree. One extension to run functional testing and another to run performance testing seems like a plausible use case. Or maybe a package has some pytest tests and some CMake tests.

Trundling on despite errors is not a good design choice. This error has to be made visible; at least visible enough that there's a chance of catching such a bug amidst a mountain of CI output.

rotu commented Mar 25 '20 22:03

One extension to run functional testing and another to run performance testing seems like a plausible use case. Or maybe a package has some pytest tests and some CMake tests.

Neither of those cases currently exists, nor is either even possible. A single extension is in charge of performing the action. It can choose to delegate the work to other extensions, but that doesn't change that a single entry point is chosen to do the work.

Trundling on despite errors is not a good design choice.

It feels to me that you want to describe things in the most negative way possible - even though I have commented extensively in this thread on why things are done the way they are (which you then happily ignore). I also clearly stated that the fallback from pytest to unittest is undesired and should be addressed.

I changed the title to what I see as the acceptance criteria for this to be closed.

dirk-thomas commented Mar 25 '20 23:03

It feels to me that you want to describe things in the most negative way possible - even though I have commented extensively in this thread on why things are done the way they are (which you then happily ignore). I also clearly stated that the fallback from pytest to unittest is undesired and should be addressed.

I'm sorry. I did not intend to be negative and I did not intend to raise hackles. I'm sure you understand how important it is to CI that tests are run as designed and the results made visible. I'm going to back off this issue and let you resolve it as you see fit. I'm sure I don't have the same insight into the architecture of Colcon as you do.

rotu commented Mar 25 '20 23:03

Also updated the ticket to fix the link to the log.

rotu commented Mar 25 '20 23:03