Enhancement: Jumping between implementation and test
I believe this package would be more rounded if it provided an easy way to jump between implementation and test like this nice little package allows:
https://packagecontrol.io/packages/Test%20Switcher
The link resolves to a 404 ☹️
There is no 1-to-1 relationship between a test and the actual implementation code, so I wonder how this should work.
E.g. if a test fails, the affected implementation can be traversed via the traceback, like using go-to next result/error. Otherwise I use the go-to definition functionality etc.
You need a good idea here.
Indeed there isn't a 1-to-1 relationship - still, the convention for naming tests can be our friend here. Usually, "test" is used as a prefix to mark test files.
Try out this package, which works like a charm for jumping between implementation and test file: https://packagecontrol.io/packages/Test%20Switcher
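For the simple, by-convention case, the jump could be as small as the sketch below; the test_ prefix and the same-directory layout are assumptions for illustration, not something this package currently does.

import os

def toggle_test_path(path, prefix="test_"):
    # Map foo.py <-> test_foo.py within the same directory (assumed convention).
    directory, name = os.path.split(path)
    counterpart = name[len(prefix):] if name.startswith(prefix) else prefix + name
    return os.path.join(directory, counterpart)

# toggle_test_path("src/parser.py")      -> "src/test_parser.py"
# toggle_test_path("src/test_parser.py") -> "src/parser.py"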
Well, this one jumps from file to file. That's okay, but I don't think it's very interesting. To jump 'to the implementation' from a test, or vice versa, to a test given any source line, is an interesting but hard problem.
File jumping is still really useful and would make a great addition to this package in case you aim to provide additional complementary functionality.
I see your point about the bidirectional mapping of test and implementation being a hard problem, and admittedly I hoped you had an idea of how to implement it. So I assume you see value in implementing #21 #22 if only we had a good enough solution for that very problem?
Both get very, very interesting if you do more than the trivial, by convention, thing. Both are very, very hard problems.
While I personally adhere to the naming convention for tests, I see that this approach might not be robust enough to work at scale. The mapping between implementation and test might well be a show-stopper for this issue as well as #20 #22.
Should we keep the issues open in case some other genius might eventually walk by and enlighten us with a solution to the mapping problem?
I think it's interesting, but jumping to 'the' implementation is ... difficult. Say you have 'coverage' installed and running. You could start and stop coverage reporting on enter/exit of each specific test function. After that, every covered line is 'the' implementation, or part of the implementation, of this particular test. But that's not a jump point, it's a possibly wide range over the source code.
What can we do with this or these datasets? Can we visualize them in a meaningful way? How can we combine them to filter out likely noise?
Ref https://github.com/tarpas/pytest-testmon
Ref https://nedbatchelder.com/blog/201810/who_tests_what_is_here.html, esp. https://nedbatchelder.com/blog/201905/coveragepy_50a5_pytest_contexts.html
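For reference, the per-test recording described in those posts is driven by coverage.py's dynamic contexts. Roughly something like the config below should enable it (option names taken from the coverage.py 5.0 / pytest-cov docs, worth verifying against the installed versions):

# .coveragerc -- record one coverage context per test function (coverage.py >= 5.0)
[run]
dynamic_context = test_function

# Alternatively, with pytest-cov >= 2.8, let the plugin label contexts per test:
#   pytest --cov=yourpackage --cov-context=test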
@kaste I am going to look into this the coming weekend. Short of plainly writing the implementation's object path straight into the docstring of a unit test, those resources might point to a potential solution.
Some preliminary findings / crude ideas:
1 - Mapping line numbers to functions
Line numbers can be mapped back to functions via regex. This plugin https://packagecontrol.io/packages/PythonStautsBarShowSymbol displays the current function in the status bar based on caret position. However, it fails to provide the correct function name if there are any nested functions.
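A regex-free alternative would be to parse the file and pick the innermost function definition whose line span contains the caret, which also handles nested functions. A minimal sketch, assuming the source parses cleanly and a Python 3.8+ plugin host (needed for end_lineno):

import ast

def enclosing_function(source, caret_line):
    # Return the dotted name of the innermost def/async def containing caret_line,
    # or None if the caret is at module level. Class names are kept in the path.
    tree = ast.parse(source)
    best = None

    def walk(node, qualname):
        nonlocal best
        for child in ast.iter_child_nodes(node):
            name = qualname
            if isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                name = qualname + [child.name]
                if (not isinstance(child, ast.ClassDef)
                        and child.lineno <= caret_line <= child.end_lineno):
                    best = ".".join(name)
            walk(child, name)

    walk(tree, [])
    return best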
2 - Mapping tests to line numbers
coverage.py's "Who Tests What" feature collects coverage data into a sqlite3 file. The mapping between test cases and the lines they cover can be retrieved with the query below (see the Python sketch further below for reading it programmatically):
SELECT path, lineno, c.context
FROM line AS l
JOIN file AS f ON l.file_id = f.id
JOIN context AS c ON l.context_id = c.id
ORDER BY path, lineno
Possibly relevant: https://github.com/nedbat/coveragepy/issues/747
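As a rough sketch of what a plugin could do with that data, assuming the 5.0-alpha schema shown above (a line table keyed by file_id/context_id; newer coverage.py releases changed this to pack line numbers into a line_bits table, so the query would need adapting there):

import sqlite3
from collections import defaultdict

def lines_per_test(coverage_db=".coverage"):
    # Group covered (path, lineno) pairs by test context, using the query above.
    query = """
        SELECT path, lineno, c.context
        FROM line AS l
        JOIN file AS f ON l.file_id = f.id
        JOIN context AS c ON l.context_id = c.id
        ORDER BY path, lineno
    """
    mapping = defaultdict(list)
    with sqlite3.connect(coverage_db) as conn:
        for path, lineno, context in conn.execute(query):
            mapping[context].append((path, lineno))
    return mapping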