Upload JSON-formatted test reports
As per this comment, add an upload of JSON-formatted test reports so that this tool https://carreau.github.io/pytest-json-report-viewer/ can be used to analyze the time spent in tests.
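Roughly, the idea is to have the CI jobs produce a pytest-json-report file and upload it as an artifact. A minimal local sketch of generating such a report, assuming the pytest-json-report plugin is installed (the output file name and test path here are just examples):

```python
# Sketch: produce the JSON test report that a CI job could then upload
# as an artifact. Assumes the pytest-json-report plugin is installed.
import pytest

pytest.main([
    "--json-report",                   # enable the JSON report plugin
    "--json-report-file=report.json",  # example output file name
    "testing/",                        # example: pytest-html's test suite
])
```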
The PR workflow needs approval to run
Sorry about that, I hadn't gotten a notification for this. 🙇
It seems something is off with all the windows runs. Any ideas?
Unfortunately no. Searching for the error just brings up old issues that don't seem relevant.
And I'm not set up on any Windows machines to try and debug. :(
Hey @mattip
Sorry for the delay, but I finally got the Windows tests to behave: https://github.com/pytest-dev/pytest-html/pull/522
As soon as that's merged, I would really appreciate if you would resume looking into this. 🙇
Rebased off master to pull in the fixes for windows. The CI run needs approval to start.
Note that the Windows pypy-38 run is deactivated, in case you want that to be part of the report.
I needed to add --durations=0 to actually get the test durations in the output. Duh. In any case, this seems to work: downloading the artifact, unzipping it, and uploading the files to https://carreau.github.io/pytest-json-report-viewer/ is starting to show something.
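In case it helps anyone repeating this, the unzip step is just something like the following sketch (the artifact file name is a placeholder, not the workflow's actual artifact name):

```python
# Sketch: extract a downloaded CI artifact and list the JSON reports it
# contains, ready to be uploaded to the viewer.
# "artifact.zip" is a placeholder, not the workflow's actual artifact name.
import zipfile
from pathlib import Path

with zipfile.ZipFile("artifact.zip") as archive:
    archive.extractall("reports")

for report in sorted(Path("reports").rglob("*.json")):
    print(report)
```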
Could you trigger the CI when you get a chance?
This is working; there was an issue in the viewer which is on its way to being fixed. Although I am currently not sure of the added value over looking at the durations report, here for py37mac and here for pypy37mac. For instance, testing/test_pytest_html.py::TestHTML::test_collect_error (why are there two?) clocks in at 37 secs on PyPy and 0.45 secs on CPython.
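For the comparison I'm just summing the per-stage durations from the JSON reports; a rough sketch, assuming the layout pytest-json-report documents (a top-level "tests" list whose entries have setup/call/teardown stages, each with a "duration"):

```python
# Sketch: print the slowest tests from a pytest-json-report file.
# Assumes the documented pytest-json-report layout: a top-level "tests"
# list whose entries have "setup"/"call"/"teardown" stages with "duration".
import json

with open("report.json") as fh:
    report = json.load(fh)

def total_duration(test):
    return sum(
        test.get(stage, {}).get("duration", 0.0)
        for stage in ("setup", "call", "teardown")
    )

for test in sorted(report["tests"], key=total_duration, reverse=True)[:10]:
    print(f"{total_duration(test):8.2f}s  {test['nodeid']}")
```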
There are several tests that are reported twice.
I wonder if it's related to: https://github.com/pytest-dev/pytest-html/issues/508
Regardless, I have no clue why that happens.
The most interesting thing is, does the test really run twice or is it just reported twice? 🤔
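One way to tell might be to count how often each nodeid appears in the JSON report and compare the call durations of the duplicates: two entries with real, different durations would suggest it actually ran twice, while identical entries would point at double reporting. A rough sketch, with the same assumptions about the report layout as above:

```python
# Sketch: find nodeids that appear more than once in a pytest-json-report
# file and show each occurrence's call duration, to distinguish
# "ran twice" from "reported twice".
import json
from collections import defaultdict

with open("report.json") as fh:
    report = json.load(fh)

occurrences = defaultdict(list)
for test in report["tests"]:
    occurrences[test["nodeid"]].append(test.get("call", {}).get("duration"))

for nodeid, durations in occurrences.items():
    if len(durations) > 1:
        print(nodeid, durations)
```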
Care to revisit this, @mattip?
If not, feel free to close. 🙇
I will continue this at some point in the weekly CI runs of PyPy + pytest-html. So I will close this, thanks.