[Feature Request] record aborted testcases
Tests that don't finish don't have a link to a log file, which is bad.
- Reason: the script searches for the last result (PASS/FAIL); if the test did not yield any results, no log is linked.
- Solution: if no data is found, link to the last attempt.
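A minimal sketch of this proposed fallback (all names and the data model here are made up for illustration; the real report script may be structured differently):

```python
# Hypothetical sketch of the proposed fallback; the result/attempt records
# and their fields are assumptions, not the actual data model.
def log_link(results: list[dict], attempts: list[dict]) -> str | None:
    """Return the log URL to link for a testcase."""
    finished = [r for r in results if r.get("result") in ("PASS", "FAIL")]
    if finished:
        return finished[-1]["log_url"]  # last recorded result, as before
    if attempts:
        return attempts[-1]["log_url"]  # new: fall back to the last attempt
    return None
```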
I'm afraid the problem is more severe: the testcases can be missing from the report entirely, and then, of course, they can neither be inserted into nor found in the database.
A check script can output results for multiple testcases, and if its execution is cut short (by a badly handled exception, for instance), then some testcases can be missing. (This holds even when the number of testcases is 1.) We don't know prima facie that these testcases are missing, because we don't know that the script is supposed to output them; they could fall under the responsibility of some other script, or they could be omitted intentionally because they are expensive to check and not checked every time.
I wouldn't want to make the specs more complicated by having them include which testcase is to be expected where and when. What I would do, though:
- improve the quality of the check scripts throughout to make sure each testcase is reported on so long as it was intended to be checked
- add a third result besides "PASS" and "FAIL", namely "ABORT" (or similar)
- insert "ABORT" results into the database (recall that previously they would be ignored)
With these measures in place, there is a much higher chance that the report contains all relevant testcases, and we can distinguish "ABORT" from missing. This distinction is only relevant for debugging: both results have the same effect on the status of the subject under test, but the former means that debug logs are present.
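To illustrate the ingestion side, here is a minimal sketch, assuming result lines of the form `<testcase>: <RESULT>` and a simple `results` table (both are assumptions; the actual format and schema may differ):

```python
import re
import sqlite3

# Assumed result-line format; ABORT is now recognized alongside PASS/FAIL.
RESULT_RE = re.compile(r"^(?P<testcase>[\w.-]+):\s*(?P<result>PASS|FAIL|ABORT)\s*$")

def ingest(lines: list[str], conn: sqlite3.Connection) -> None:
    """Insert every recognized result into the database, including ABORT
    (which was previously dropped). An unreported testcase simply yields
    no row, which is how ABORT and missing stay distinguishable."""
    for line in lines:
        match = RESULT_RE.match(line)
        if match:
            conn.execute(
                "INSERT INTO results (testcase, result) VALUES (?, ?)",
                (match["testcase"], match["result"]),
            )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (testcase TEXT, result TEXT)")
ingest(["service-up: PASS", "quota-check: ABORT"], conn)
```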
@depressiveRobot the first step here would be to make the test scripts more robust in general: go through them one by one, and in each case make sure that every test case is reported on, potentially as ABORT, unless something extreme happens (such as the Python interpreter being killed or the whole pod/VM being killed). This would indeed be a good first issue to get acquainted with the scripts. It also leaves enough room for creativity, because you can work out a reproducible pattern for making these scripts robust.
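One possible shape for such a pattern (a sketch, not the agreed-upon design; `CheckFailed` and the result-line format are assumptions): a context manager that guarantees exactly one result line per testcase, falling back to ABORT on any unexpected exception.

```python
import sys
import traceback
from contextlib import contextmanager

class CheckFailed(Exception):
    """Raised by a check body to signal a regular FAIL."""

@contextmanager
def testcase(name: str):
    """Emit exactly one result line for `name`: PASS if the body runs
    through, FAIL on CheckFailed, ABORT on anything else (short of the
    interpreter being killed). The exception is swallowed so the script
    can proceed to the next testcase."""
    try:
        yield
    except CheckFailed:
        print(f"{name}: FAIL")
    except Exception:
        traceback.print_exc(file=sys.stderr)  # debug info for the ABORT
        print(f"{name}: ABORT")
    else:
        print(f"{name}: PASS")

# Usage; flavors_conform is a hypothetical stand-in for a real check.
def flavors_conform() -> bool:
    return True

with testcase("flavor-naming"):
    if not flavors_conform():
        raise CheckFailed()
```

With this in place, an unhandled exception in one check body no longer silences all the testcases that come after it.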
Done in #975
I left it open on purpose, because there is still work to do (e.g., for KaaS)
Ah ok, sorry for that.