Feedback about Test Analytics and Flaky Test Reporting

Open rohan-at-sentry opened this issue 1 year ago • 30 comments

Thanks for dropping by! 👋

We've recently released a whole new feature around Test analytics and are working on reporting Test Flakes on your PR ❄️ .

We'd love to hear feedback on

  • How your setup experience was.
  • How easy/useful the PR comment is

This issue is intended to share and collect feedback about the tool. If you have support needs or questions, please let us know!

rohan-at-sentry avatar Mar 14 '24 20:03 rohan-at-sentry

  • How your setup experience was.
    • Very easy, worked on the first go after adding the new GitHub action step.
  • How easy/useful the PR comment is
    • I like that it is combined with the existing coverage comment versus being a separate one.

In the event this action is not used in the context of a pull request (i.e. a scheduled test, pre-release test, etc.), it would be nice to be able to also display the results in GitHub's Job Summary in lieu of a PR comment. We currently use https://github.com/phoenix-actions/test-reporting to do this and it works nicely. When sharing test results, permalinking to the job summary works better due to the bigger screen size it can occupy. If this feature was added, we'd switch over completely!
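For what it's worth, here's a rough sketch of the stopgap we could add ourselves in the meantime, assuming an earlier step writes a small markdown summary of the results (the file name here is made up):

# Hypothetical step: append test results to the GitHub Job Summary so they
# are visible even outside of a pull request context.
- name: Publish test results to job summary
  if: always()  # run even when the test step failed
  run: |
    echo "## Test results" >> "$GITHUB_STEP_SUMMARY"
    cat results-summary.md >> "$GITHUB_STEP_SUMMARY"  # hypothetical file from an earlier step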

houserx-jmcc avatar Mar 29 '24 20:03 houserx-jmcc

An update: we're also intermittently seeing the error below, which we have unfortunately also been seeing in v4 of the standard codecov-action (https://github.com/codecov/codecov-action/issues/1280):

Error: write EPIPE
    at afterWriteDispatched (node:internal/stream_base_commons:160:15)
    at writeGeneric (node:internal/stream_base_commons:151:3)
    at Socket._writeGeneric (node:net:952:11)
    at Socket._write (node:net:964:8)
    at writeOrBuffer (node:internal/streams/writable:447:12)
    at _write (node:internal/streams/writable:389:10)
    at Socket.Writable.end (node:internal/streams/writable:665:17)
    at Socket.end (node:net:722:31)
    at module.exports (/runner/_work/_actions/codecov/test-results-action/v1/node_modules/gpg/lib/spawnGPG.js:50:1)
    at Object.call (/runner/_work/_actions/codecov/test-results-action/v1/node_modules/gpg/lib/gpg.js:28:1)

houserx-jmcc avatar Apr 01 '24 20:04 houserx-jmcc

Thanks for your feedback @houserx-jmcc - we're looking into adding support for reporting on GH Job summary as well

Re the issue with EPIPE - yeah it's something we've noticed. We're still working on a fix for this

rohan-at-sentry avatar Apr 02 '24 14:04 rohan-at-sentry

@houserx-jmcc out of curiosity, what non-PR use cases are you using https://github.com/phoenix-actions/test-reporting for?

Off the top of my head, I'd guess something along the lines of running tests before deploying a production image/release, etc. Are there other use cases you can share?

rohan-at-sentry avatar Apr 02 '24 20:04 rohan-at-sentry

Off the top of my head, I'd guess something along the lines of running tests before deploying a production image/release, etc. Are there other use cases you can share?

Yup! That is one. Others include: running the tests on a stable branch while investigating suspected flaky tests, browser tests we run after deployments to increase confidence, and scheduled tests that periodically perform data quality checks against our stable environment databases. We use Jest for all of these use cases and thus can easily pipe the XML into the test action reporter.

And for viewing historical test results on a pull request, the job summary is tied to each workflow run, whereas the PR comment is auto-updated in place, so checking on past results there becomes a bit more cumbersome.

houserx-jmcc avatar Apr 02 '24 20:04 houserx-jmcc

@houserx-jmcc - is this directionally what you had in mind? (still early days!)

https://github.com/joseph-sentry/codecov-cli/actions/runs/8560172298

rohan-at-sentry avatar Apr 04 '24 19:04 rohan-at-sentry

Precisely! Looks great :)

houserx-jmcc avatar Apr 04 '24 21:04 houserx-jmcc

@houserx-jmcc - We're currently testing a new version of our test analytics feature internally where we're able to detect and report on Flaky Tests. At this time, I'm reaching out to everyone who has set up Codecov Test Analytics to invite feedback on the UI of our Flaky Test Feature. If this is of interest to you, please let me know and I'll reach out with some scheduling options.

rohan-at-sentry avatar Apr 08 '24 18:04 rohan-at-sentry

@rohan-at-sentry Happy to provide some additional feedback 👍

houserx-jmcc avatar Apr 08 '24 18:04 houserx-jmcc

thanks @houserx-jmcc. Here are some times that work for us (you can find time beginning next week if that works better) - https://calendar.app.google/7qTT3zeshUnrEHMy5

cc @Adal3n3

rohan-at-sentry avatar Apr 11 '24 16:04 rohan-at-sentry

I read through the docs a few times and I have a question. For context, I have a monorepo that I plan on setting up with Codecov via the components feature, whereby I'll run the specs and generate a report for each component into a dedicated directory that'll then be uploaded all at once via the official GHA, which will handle assigning each report to the proper component.

However, what I'd like to confirm is: would doing something similar for the JUnit reports work in a similar way, where Codecov will just know which component to assign the test results to? Does this feature even work with components at the moment? Or is it global to a repo, and as long as I upload all the JUnit reports it'll just work?
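For concreteness, here's a rough sketch of the workflow shape I have in mind; the directory layout and names are made up, and I'm assuming globs work in the files argument:

# Earlier steps run each component's specs and write coverage + JUnit XML
# reports into reports/<component>/ (hypothetical layout).
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v4
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    directory: reports
- name: Upload test results to Codecov
  if: always()
  uses: codecov/test-results-action@v1
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    files: reports/**/*.junit.xml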

Blacksmoke16 avatar Aug 08 '24 05:08 Blacksmoke16

@Blacksmoke16 currently, yes, the following is an accurate way to describe the system:

global to a repo, and as long as I upload all the JUnit reports it'll just work

For clarity, right now we'd simply output these results onto the tests tab. You can see this in action on our own repos here. Further:

codecov will just know which component to assign the test results to?

is not supported currently, but I'd like to learn more

We're thinking of ways to group around test suites and environments (those are things we've heard from customers we're working with), but I'd be curious to hear what other groupings are top of mind for you. Also, do you envision these "groupings" to be identical to the components you'd set up for coverage? If so, why?

rohan-at-sentry avatar Aug 08 '24 23:08 rohan-at-sentry

For clarity, right now we'd simply output these results onto the tests tab.

Ah, okay, that example helps a lot. Basically it's just a big list of all tests.

Also, do you envision these "groupings" to be identical to the components you'd set up for coverage? If so, why?

Uhh, I'm still in the reading/research/thinking phase of this, as I have yet to actually get reports uploading or anything, so I don't have a ton of useful insight just yet. However, my initial reasoning was that, in my case at least, a component is an individual, independent project that just happens to be in a monorepo. So it just made sense that test output would be tied to a specific component, to more easily see stats in that regard versus all components at once. Then you could more easily answer questions like "what is the most failure-prone component?" Then, as you pointed out, flags would still make sense for filtering purposes, to maybe spot issues like "our integration tests in staging have a lot higher failure rate than on prod" or something along those lines.

But honestly, it could also make sense to have the tests tab be more like higher-level stats/aggregates and have the actual test results be more tightly coupled to a specific commit/PR? I'll be sure to report back if I think of anything else after actually getting things set up and playing with it more.

Blacksmoke16 avatar Aug 09 '24 01:08 Blacksmoke16

@rohan-at-sentry in one repo, test uploads fail with Upload failed: {"service":["This field may not be null."]} on GHA but not locally: https://github.com/codecov/codecov-cli/issues/486 / https://github.com/ansible/awx-plugins/actions/runs/10581340011/job/29318439675#step:23:47. I haven't been able to identify the exact cause. My understanding is that the confusing error is coming from Django in https://github.com/codecov/codecov-api. It would be useful to both improve the logging and fix the issue in the uploader.

webknjaz avatar Aug 27 '24 16:08 webknjaz

Thanks, @webknjaz I opened https://github.com/codecov/engineering-team/issues/2437 to assess

rohan-at-sentry avatar Aug 27 '24 16:08 rohan-at-sentry

thanks @houserx-jmcc. Here are some times that work for us (you can find time beginning next week if that works better) - https://calendar.app.google/7qTT3zeshUnrEHMy5

cc @Adal3n3

Is the flaky test detection enabled automatically at this point, or is it something we need to enable? Also happy to provide some feedback. Seems like this has good potential, but it's missing some key features that would make it more useful for us.

kephams avatar Sep 05 '24 03:09 kephams

@kevinphamsigma yeah, flaky test detection should be enabled at this point. I'd love to hear feedback. Can I get you to book some time with us using this link?

rohan-at-sentry avatar Sep 05 '24 14:09 rohan-at-sentry

The JUnit XML files generated by meson are not parsed: https://github.com/jluebbe/rauc/pull/4#issuecomment-2348679752

In the web dashboard tab for tests, "No test results found for this branch" is shown.

jluebbe avatar Sep 13 '24 11:09 jluebbe

It would be useful if a wildcard could be used in the files: argument, so that multiple .junit.xml files in the same directory could be uploaded in one go.

jluebbe avatar Sep 13 '24 12:09 jluebbe

@jluebbe I'm going to track the parsing issue you're facing in a different issue - https://github.com/codecov/test-results-action/issues/86. I think we were able to root cause it.

Re - multiple files being uploaded from the same directory, we do this already. Can you point me to a run where it didn't work? (is it the same one that happened for https://github.com/jluebbe/rauc/pull/4#issuecomment-2348679752)

rohan-at-sentry avatar Sep 13 '24 14:09 rohan-at-sentry

Re - multiple files being uploaded from the same directory, we do this already. Can you point me to a run where it didn't work? (is it the same one that happened for jluebbe/rauc#4 (comment))

At least from the examples and argument documentation, it's not clear how to do this. Perhaps something like files: test-output/*.junit.xml would work?

jluebbe avatar Sep 13 '24 16:09 jluebbe

@rohan-at-sentry got another couple of bugs in the context of test file uploads:

  • upload file discovery/normalization can traceback on unexpected input: https://github.com/codecov/test-results-action/issues/89 / https://github.com/codecov/codecov-cli/issues/501

  • I tried uploading junit files produced by ansible-test sanity (it's a testing tool used in Ansible Collections ecosystem and ansible-core itself; it is able to produce reports regarding different linter runs) and Codecov bot is not happy about the format but does not provide any details on what it expected: https://github.com/ansible/awx/pull/15527#issuecomment-2351816493 — it'd be nice to have some specification to check against. Here's the implementation of said junit XML writer: https://github.com/ansible/ansible/blob/b5ae8a3/lib/ansible/utils/_junit_xml.py / https://github.com/ansible/ansible/blob/b5ae8a3/test/lib/ansible_test/_internal/test.py#L110-L127

  • note that the above problem is happening in the same project where reporting GH statuses is broken: #511 (this one is probably not related, but I'm mentioning it just in case)

webknjaz avatar Sep 16 '24 00:09 webknjaz

This feature is a great idea and I'm eager to try it out!

How your setup experience was.

Pretty easy. What caught me off guard was the naming convention enforcing file names ending in *junit.xml. That could either be mentioned more visibly or loosened; see the sketch at the end of this comment.

How easy/useful the PR comment is

TBH I don't much like automated PR comments. I'd prefer to have that in a job summary. For the coverage reports, one can already disable the comments.
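Regarding the naming convention above, here's the sketch I mentioned; it assumes pytest as the test runner (any runner with a JUnit XML output option would work the same way) and relies on the action's default file search:

# Sketch only: point the test runner at a file name matching the
# expected *junit.xml convention so the default search picks it up.
- name: Run tests
  run: pytest --junitxml=test-results.junit.xml
- name: Upload test results to Codecov
  if: always()
  uses: codecov/test-results-action@v1
  with:
    token: ${{ secrets.CODECOV_TOKEN }}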

Bibo-Joshi avatar Sep 19 '24 19:09 Bibo-Joshi

At least from the examples and argument documentation, it's not clear how to do this. Perhaps something like files: test-output/*.junit.xml would work?

@jluebbe you are correct, it is missing from the documentation but you should be able to use wildcards / globs in the files argument
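For example, a minimal sketch of the relevant step using the glob from your suggestion (the path is from your example, not a requirement):

- name: Upload test results to Codecov
  if: always()
  uses: codecov/test-results-action@v1
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    files: test-output/*.junit.xml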

joseph-sentry avatar Sep 19 '24 20:09 joseph-sentry

Pretty easy. What caught me off guard was the naming convention enforcing file names ending in *junit.xml. That could either be mentioned more visibly or loosened.

@Bibo-Joshi this restriction is being loosened (not by much, to be fair): we're now searching for *junit*xml and *TEST*.xml files.

It should be able to find any file matching that pattern in subdirectories automatically if disable_search is not set to true.

joseph-sentry avatar Sep 19 '24 20:09 joseph-sentry

@jluebbe - I've updated our docs to cover this, thanks for writing in: https://docs.codecov.com/docs/test-result-ingestion-beta#troubleshooting

rohan-at-sentry avatar Sep 19 '24 22:09 rohan-at-sentry

Hey, thanks for this feature, it's really helpful for tracking down flaky tests. It would be useful to be able to reset the statistics once a test has been fixed, like the number of failed commits.

It would be nice to see the statistics for all branches at once, to also see which tests are flaky on pull requests, not just on main.

millotp avatar Sep 20 '24 13:09 millotp

@millotp - Thanks for the feedback

It would be nice to see the statistics for all branches at once, to also see which tests are flaky on pull requests, not just on main.

You're right, and we've heard this from others as well.

It's something we're tracking internally, and it's likely we'll pick it up sometime in mid-Q4 this year: https://github.com/codecov/feedback/issues/516

Re

it would be useful to be able to reset the statistics once a test has been fixed, like the number of commits failed.

We haven't really documented this just yet (and we should; I'll address that later today), but we "expire" the flaky tag for a test after ~30 days of "non-flake-like activity".

Effectively, if a flaky test (that we detected) hasn't failed for 30 days (maybe you fixed it, maybe it doesn't flake very often), then we no longer report it as flaky, because it isn't demonstrably slowing your team down. Hopefully that helps.

rohan-at-sentry avatar Sep 20 '24 14:09 rohan-at-sentry

Hello, my team and I are interested in migrating to Codecov Test Analytics from Datadog CI, but the main reason we want test analytics is to detect flaky tests at the repo level, not at the branch level. We have a periodic reminder to look at the most flaky tests in our repo and fix them. So +1 for https://github.com/codecov/feedback/issues/516

cosmith avatar Sep 20 '24 15:09 cosmith

Okay, I finally got around to getting this implemented, but am running into a bit of an issue:

❌ We are unable to process any of the uploaded JUnit XML files. Please ensure your files are in the right format.

The CI job says it found all 13 reports and uploaded them. I also pasted the XML from one of them into https://lotterfriends.github.io/online-junit-parser and it parsed fine. Is there a way to get more details on what the problem is? It's possible some of the JUnit files from the other CI job would not have any tests in them, but those files are still valid.

EDIT: Here's an excerpt from the junit files if that helps:

<?xml version="1.0"?>
<testsuite tests="116" skipped="0" errors="0" failures="0" time="0.005031332" timestamp="2024-09-29T19:55:54Z" hostname="theStone">
  <testcase file="/home/george/dev/git/athena-framework/athena/src/components/negotiation/spec/base_accept_spec.cr" classname="src.components.negotiation.spec.base_accept_spec" name="BaseAcceptTest parse parameters 3" line="18" time="6.2057e-5"/>
  <testcase file="/home/george/dev/git/athena-framework/athena/src/components/negotiation/spec/base_accept_spec.cr" classname="src.components.negotiation.spec.base_accept_spec" name="BaseAcceptTest parse parameters 1" line="18" time="4.9572e-5"/>
  <testcase file="/home/george/dev/git/athena-framework/athena/src/components/negotiation/spec/base_accept_spec.cr" classname="src.components.negotiation.spec.base_accept_spec" name="BaseAcceptTest parse parameters 0" line="18" time="6.358e-6"/>
  <testcase file="/home/george/dev/git/athena-framework/athena/src/components/negotiation/spec/base_accept_spec.cr" classname="src.components.negotiation.spec.base_accept_spec" name="BaseAcceptTest build parameters string 0" line="7" time="3.832e-6"/>
</testsuite>

EDIT2: I tried making the file attribute in the junit.xml files relative to the project directory, but that also failed: https://github.com/athena-framework/athena/pull/462#issuecomment-2408378660

Blacksmoke16 avatar Sep 30 '24 13:09 Blacksmoke16