
Stream output as test runs

jchorl opened this issue • 4 comments

Greetings! Thanks for the wonderful library!

I was wondering if there is a way to stream output while a test is running (more specifically, streaming stderr). It would be very helpful to track progress and get details of what is currently running, particularly with long-running tests, especially in CI where it's difficult to tail files.

I did put up a sample commit of one way to do this, but it's a bit invasive: https://github.com/LUMC/pytest-workflow/commit/df2e9dc130f926dd886135f8f1e37fb2f63c0ae9
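For reference, here is a minimal, self-contained sketch of the general technique (tee-ing a child process's stderr to both a log file and the parent's stderr as lines arrive). This is not the linked commit and makes no claims about pytest-workflow's internals; `run_and_stream` is a hypothetical helper name:

```python
import subprocess
import sys
from pathlib import Path


def run_and_stream(args, stderr_log: Path) -> int:
    """Run a command, writing its stderr to a log file while also
    echoing each line to our own stderr as it arrives."""
    with subprocess.Popen(
        args, stderr=subprocess.PIPE, text=True, bufsize=1
    ) as proc, stderr_log.open("w") as log:
        # Iterating the pipe yields lines as the child emits them.
        for line in proc.stderr:
            log.write(line)
            sys.stderr.write(line)
            sys.stderr.flush()
    # Popen's context manager waits for the child, so returncode is set.
    return proc.returncode
```

The log file still ends up with the complete stderr, so downstream checks that read the log are unaffected; the only difference is that lines are also echoed live.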

Anyway, opening issue for discussion before a PR to see if you've given this thought and if there's a good way to do it.

Thanks again!

jchorl avatar Jul 15 '24 23:07 jchorl

Hi. Well, this will quickly become unusable when workflow threads > 1, because output from all the workflows will be intermingled. You can also use the tail utility (on Linux) to check the log files directly: pytest-workflow prints the location of the logs on stdout, so you can follow them as they are written. Does that help you?

rhpvorderman avatar Jul 16 '24 06:07 rhpvorderman

Thanks for the response.

I agree that things get challenging with multi-threading. One example of how another testing system handles this is Bazel's --test_output=streamed: https://bazel.build/reference/command-line-reference#flag--test_output

From the docs:

'streamed' to output logs for all tests in real time (this will force tests to be executed locally one at a time regardless of --test_strategy value).

There are cases where tail is not possible. From the original post:

It would be very helpful to track progress and get details of what is currently running, particularly with long-running tests, especially in CI where it's difficult to tail files.

To build on this, imagine a CI job on GitHub Actions that typically takes one hour. It's difficult to have any insight into what the job is actually doing (is it stuck?) until it times out or finishes. If the job usually takes an hour and you're at 1h30, you have to decide whether to cancel it and lose progress, or wait for the timeout (which could be, e.g., 4 or 12 hours). Neither is great!

Another case where streaming output is valuable is debugging a test case locally that times out. Sure, you can keep switching over to tail, but you can't really leave a persistent tail running, because each pytest run recreates the log file. So you'd have to repeatedly run the test, switch terminals, tail, wait until it gets far enough, switch back, and cancel.
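(As an aside, one workaround for the recreation problem is a follow-by-name tail, in the spirit of `tail -F`, which reopens the file when it is replaced. A rough Python sketch of the idea, not part of pytest-workflow:)

```python
import os
import time
from pathlib import Path


def follow(path: Path, poll: float = 0.5):
    """Yield lines appended to a file, reopening it whenever the file
    is recreated (similar in spirit to `tail -F`). Runs until the
    caller stops consuming the generator."""
    handle = None
    inode = None
    while True:
        if handle is None:
            try:
                handle = path.open()
                inode = os.fstat(handle.fileno()).st_ino
            except FileNotFoundError:
                time.sleep(poll)
                continue
        line = handle.readline()
        if line:
            yield line
        else:
            # No new data: check whether the file was replaced.
            try:
                if os.stat(path).st_ino != inode:
                    handle.close()
                    handle = None
                    continue
            except FileNotFoundError:
                handle.close()
                handle = None
                continue
            time.sleep(poll)
```

Even so, this still means running a second terminal beside each pytest invocation, so it doesn't remove the appeal of built-in streaming.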

A less common case is tailing stderr for keywords and shutting down a CI job, which may be easier than tailing a file in a test-specific directory.

Anyway, it's not a deal-breaker. I think there are cases where this functionality would be a notable quality-of-life improvement so I'm wondering if there is appetite for such a feature in this repo.

jchorl avatar Jul 17 '24 02:07 jchorl

There is always an appetite for features that are this well-motivated. Indeed, this would be of great benefit in CI.

This feature and workflow threading can simply be made mutually exclusive in the argument parser.
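As a sketch of that exclusivity check (the option names here are hypothetical, not pytest-workflow's actual flags):

```python
# Hypothetical option names for illustration; pytest-workflow's real
# flags may differ.
def validate_options(stream_output: bool, workflow_threads: int) -> None:
    """Reject streaming combined with parallel workflows, since their
    streamed output would interleave unreadably."""
    if stream_output and workflow_threads > 1:
        raise ValueError(
            "--stream-output requires --workflow-threads 1: streamed "
            "output from parallel workflows would be intermingled"
        )
```

In a pytest plugin this kind of check would typically run once at configure time, after both options have been parsed, and abort the session with a usage error.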

rhpvorderman avatar Jul 17 '24 04:07 rhpvorderman

Thanks for accepting! I'll get a PR going when I get the chance and we can try to find an elegant implementation.

jchorl avatar Jul 19 '24 18:07 jchorl