
JUnit report sent to wrong build

Open jawnsy opened this issue 3 years ago • 33 comments

Dear maintainer,

Thanks so much for creating this excellent plugin and sharing it for others to use!

I have multiple workflows defined, and it seems that the check is added to the first workflow, rather than the one that actually ran the test suite. Is there some way to associate the report with the correct workflow definition? Ideally I'd like the report to be below the test suite (e.g. job that runs tests, publishes report, should have the report appended as a check below it or nearby)

jawnsy avatar Feb 11 '21 19:02 jawnsy

Hi @jawnsy

This is sadly a problem of GitHub, which you can find more about here: https://github.community/t/github-actions-status-checks-created-on-incorrect-check-suite-id/16685

Hoping they provide a fix for that soon.

mikepenz avatar Feb 11 '21 19:02 mikepenz

Hi there- unfortunately, this issue makes this Action pretty unusable for us since the check run output is the only place our test suite failures are visible and when they get added to some other "random" Workflow, they're impossible to find.

I waded through some of the related issues and there seems to be a potential workaround:

JUnit reporter github action should update existing check instead of creating new one.

Get check which triggered action: https://docs.github.com/en/rest/reference/checks#get-a-check-run

Update this check with provided information about test results: https://docs.github.com/en/rest/reference/checks#update-a-check-run

If I understand correctly, this would add all of your annotations to the actual Job where I'm using the step (our tests), instead of as a new check that is getting misplaced in some other Workflow.

Besides being a workaround for this issue, this is actually exactly where I was trying to find these annotations to begin with. I think it's a much more useful place to put them. Even if the check was showing up under the correct Workflow, I think users would still be looking for failures in the existing, failed "Test Job" check and not some separate "JUnit Reporter" check (I know we could configure the name to be more obvious, but still).

Am I missing anything? Would you consider implementing such functionality?

pbrisbin avatar Nov 18 '21 15:11 pbrisbin

@pbrisbin thank you very much for the added notes.

It sure sounds like an opportunity to use those APIs and make it so it will only update the current run. Would need to look into it. Do you maybe know an action already using this approach?

mikepenz avatar Nov 18 '21 18:11 mikepenz

Hmm, I can't say that I do. A lot of Actions add annotations to the Job's check where they're run, but they do it by outputting messages with a problem-matcher attached, not by updating the Check. I suppose that's another alternative you could consider.
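
(For reference, a problem matcher is a small JSON file registered via a workflow command; anything a later step prints that matches its regex becomes an annotation on the running job. The file name and regex below are purely illustrative, not from this thread:)

{
  "problemMatcher": [
    {
      "owner": "junit-failures",
      "pattern": [
        {
          "regexp": "^(.+):(\\d+): (.+)$",
          "file": 1,
          "line": 2,
          "message": 3
        }
      ]
    }
  ]
}

It would be registered with something like echo "::add-matcher::.github/junit-matcher.json" before the test step.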

Looking at the docs a bit more closely, I think it would look something like,

const ref = head_sha
const check_name = github.context.job // assumes check names are job.<job-id>
const filter = 'latest'

const checks = await octokit.rest.checks.listForRef({
  ...github.context.repo,
  ref,
  check_name,
  filter
});

// assumes a check for our job exists if we're running, probably want some error-handling
const check_run_id = checks.data.check_runs[0].id

// as per docs: "Each time you update the check run, annotations are 
// appended to the list of annotations that already exist for the check run."
await octokit.rest.checks.update({ ...github.context.repo, check_run_id, ... })

Oh, and I noticed you're currently only sending your first 50 annotations. I suppose it's reasonable to expect a test suite to never have more than 50 test failures in a given run, but you could send everything if you call this .update() multiple times, each with fewer than 50.

The Checks API limits the number of annotations to a maximum of 50 per API request. To create more than 50 annotations, you have to make multiple requests to the Update a check run endpoint. Each time you update the check run, annotations are appended to the list of annotations that already exist for the check run
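
A rough sketch of that batching, assuming allAnnotations is the full list and octokit / check_run_id are as in the snippet above (the names here are illustrative):

const batchSize = 50

for (let i = 0; i < allAnnotations.length; i += batchSize) {
  // each update appends its batch to the annotations already on the check run
  await octokit.rest.checks.update({
    ...github.context.repo,
    check_run_id,
    output: {
      title: 'Test Results',
      summary: `${allAnnotations.length} annotations in total`,
      annotations: allAnnotations.slice(i, i + batchSize)
    }
  })
}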

pbrisbin avatar Nov 19 '21 14:11 pbrisbin

@pbrisbin there's a version coming up using the outlined APIs. Sadly they lead to a different result than anticipated: because the check is still running at the time the result is updated, it won't keep the failure state, for example.

Just for reference, there is still no API to create a check in a specific suite / workflow sadly:

https://github.community/t/specify-check-suite-when-creating-a-checkrun/118380/9 https://github.community/t/associate-a-check-run-with-a-workflow/122540 https://github.community/t/associating-a-check-run-with-a-specific-action-run/126682

mikepenz avatar Nov 21 '21 15:11 mikepenz

Awesome!

due to the check still being running at the time of updating the result, it won't keep the failure state for example

Let me make sure I understand...

So, if I use this in an app-test job for example, it runs tests then does the JUnit thing, and the Job will pass/fail based on the tests and not the JUnit processing? That seems... desired? I kind of think the only reason the JUnit action needs to control the failure at all is because it's doing that separate-check thing. When it's part of the test's check, the tests setting the check result just makes sense (to me).

Am I missing something?

pbrisbin avatar Nov 22 '21 17:11 pbrisbin

@pbrisbin you are actually right, that particular thing may not be an issue in this case.

One thing seems to be a problem though: the way the annotations are rendered if we use the update approach (screenshot: 2021-11-22 at 21 58 46)

vs.

(screenshot: 2021-11-22 at 21 59 28)

https://github.com/mikepenz/action-junit-report/runs/4279043601

mikepenz avatar Nov 22 '21 20:11 mikepenz

Wow, well that's super annoying. I wonder why that is :thinking:

pbrisbin avatar Nov 22 '21 21:11 pbrisbin

I might be grasping at straws, but in your PR, it looks like the Update sets { output.conclusion } while the Create sets { conclusion, output.* }. Is it possible that matters?

pbrisbin avatar Nov 22 '21 21:11 pbrisbin

@pbrisbin sadly this wasn't it, but it makes the build fail while it's still ongoing, breaking a few things :D https://github.com/mikepenz/action-junit-report/actions/runs/1492172809 After it's done, it will change its state to successful in this particular build.

mikepenz avatar Nov 22 '21 21:11 mikepenz

Ha, not unexpected in hindsight. I would say that when doing the update path you don't want conclusion at all -- you aren't intending to conclude it, right?

pbrisbin avatar Nov 22 '21 21:11 pbrisbin

I wonder if the test's conclusion/output is somehow taking over as the "important one". So your annotations remain, but they get subdued in the UI once the test completes?

Either way, totally annoying. And sorry if this ends up a dead-end.

pbrisbin avatar Nov 22 '21 21:11 pbrisbin

@pbrisbin I feel that this is potentially an issue with the GitHub API. The documentation itself doesn't indicate any behavior difference, as far as I could find briefly: https://docs.github.com/en/rest/reference/checks#update-a-check-run

Thank you for also looking into it. Really appreciated!

mikepenz avatar Nov 22 '21 21:11 mikepenz

Yeah, I agree. Looking at the API docs, the request bodies for Create and Update are identical, so you would expect the result to be the same too.

pbrisbin avatar Nov 22 '21 21:11 pbrisbin

There seems to be more functionality that doesn't work for the update vs create:

I don't see the annotations appear in the diff. Unfortunately, I think not seeing the annotations in-line is a bridge too far for us to use this style.

I also notice here that the Process completed with exit code 1. annotation actually includes the Job/Check name (tts-test), but the others do not -- as if they're attached to the Workflow but not any one Job/Check. I wonder if that's somehow related.

pbrisbin avatar Nov 29 '21 15:11 pbrisbin

@pbrisbin that's unfortunate :/

I haven't had the chance to check if there have been other reports of the update endpoint behaving differently, or if anything has been mentioned on the forums.

mikepenz avatar Nov 29 '21 16:11 mikepenz

Oh, that may actually be our fault. I need to investigate where file/line information is meant to live in a JUnit XML. Our formatter seems to just have it as part of the failure message, but that's probably not right.

Notice the "Line 1 in {spec description}", even though {file}:{line} is there as the first part of the message.

pbrisbin avatar Nov 29 '21 18:11 pbrisbin

@pbrisbin sadly it seems line numbers are not consistent between different output formats. Funnily enough, there was a feature request today to add support for handling the line attribute in the report.xml:

  • https://github.com/mikepenz/action-junit-report/pull/430

There surely is some way to expand support for handling it. It should already handle retrieving line numbers from file paths, as you show in your screenshot: https://github.com/mikepenz/action-junit-report/blob/main/src/testParser.ts#L58-L60

Maybe if you have a sample output, we can use it to expand the test cases and see why it does not work in your case.

mikepenz avatar Nov 29 '21 18:11 mikepenz

Oh, that's convenient!

Here is an example of what it is right now: golden.xml

But we author this formatter, so I can pretty easily add attributes.{file,line}, which it looks like would get picked up before any inferring is done.

pbrisbin avatar Nov 29 '21 18:11 pbrisbin

I didn't realize that. In this case I'd propose adding the line attribute :) for direct support.

And thanks for providing the XML. I see, so we can't parse it from the description. If it were part of the file attribute, it would already be parsed.

mikepenz avatar Nov 29 '21 21:11 mikepenz

So you are saying that if we added file="{file}:{line}" that would also work?

line numbers are not consistent between different output formats

And there isn't any one way to do it that is more common or standardized than another?

pbrisbin avatar Nov 29 '21 21:11 pbrisbin

I believe using the line attribute is the most stable form, as shown in this xunit-format JUnit export: https://github.com/mikepenz/action-junit-report/blob/main/test_results/xunit/report.xml
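
For illustration, a testcase carrying both attributes might look something like this (names and paths are made up):

<testcase name="fails when input is empty" classname="ParserSpec" file="src/parser.ts" line="42">
  <failure message="expected list to be non-empty">stack trace here</failure>
</testcase>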

In cases where neither is available (the line via the file attribute, or via the line attribute), the action tries some additional searching to find the line number in the stacktrace: https://github.com/mikepenz/action-junit-report/blob/main/src/testParser.ts#L53-L55

I'll check again and see why it wouldn't match the description in your case, as it should be treated as the stacktrace.


Did some searching, but the XSD for JUnit output, for example, does not specify line, so it seems this is generally an extension to the spec.

mikepenz avatar Nov 30 '21 07:11 mikepenz

@mikepenz Hi! Thank you for developing this nice plugin.

I am looking for a way to solve this issue. Is there any progress or workaround?

koyama-yoshihito avatar Jul 20 '22 08:07 koyama-yoshihito

@koyama-yoshihito as of this discussion here, the issue is still not fixed by GitHub: https://github.com/github-community/community/discussions/14891

mikepenz avatar Jul 20 '22 08:07 mikepenz

@mikepenz We'll just keep watching that discussion. Thanks for the info.

koyama-yoshihito avatar Jul 20 '22 10:07 koyama-yoshihito

Could this feature help with the issue: https://github.blog/2022-05-09-supercharging-github-actions-with-job-summaries?

laughedelic avatar Jul 20 '22 15:07 laughedelic

@laughedelic thank you for the link. The newest version of this action does publish a job summary.

While it seems to be a partial solution, I don't think it fully covers this use case.

Sample display: https://github.com/mikepenz/action-junit-report/actions/runs/2676469277

Sadly this would lose the great feature of seeing the test results within the reported checks:

(screenshot: 2022-07-20 at 18 09 49)

We could add a flag to disable sending the report and only produce the summary, for people who'd prefer that.
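
(For anyone curious, a job summary is written from a JS action via the @actions/core summary helper or the GITHUB_STEP_SUMMARY file; a minimal sketch of that API, with purely illustrative contents rather than this action's actual table:)

import * as core from '@actions/core'

// minimal sketch of writing a job summary with @actions/core;
// a real action would build the table from the parsed test results
await core.summary
  .addHeading('Test Results')
  .addTable([
    [{data: 'Suite', header: true}, {data: 'Passed', header: true}, {data: 'Failed', header: true}],
    ['ParserSpec', '12', '1']
  ])
  .write()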

mikepenz avatar Jul 20 '22 16:07 mikepenz

Glad to know that it's already in use! 👏

I think the summary in the checks list is a very nice feature, but unfortunately, when a lot of tests are run in parallel, the number of check summaries is doubled, so it would be good to either be able to disable it, or consolidate all summaries by some tag (as suggested in https://github.com/mikepenz/action-junit-report/issues/195).

laughedelic avatar Jul 20 '22 19:07 laughedelic

@laughedelic without a configuration where multiple imports can be done at once, I feel the current summary API exposed by GitHub's core library doesn't offer that possibility yet.

But given a sample summary of this: https://github.com/mikepenz/action-junit-report/actions/runs/2717236074#summary-7464569272 it seems like it's not too bad.

What is also possible already today is to enable the annotate_only mode, which will no longer create a check run, meaning only the summary will be created.
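
A usage sketch of that mode (input names as used by this action; the report path is just an example):

- name: Publish Test Report
  uses: mikepenz/action-junit-report@v3
  if: always()
  with:
    report_paths: '**/build/test-results/test/TEST-*.xml'
    annotate_only: true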

mikepenz avatar Jul 22 '22 08:07 mikepenz

Btw., regarding multiple imports as one: please see my comment here, https://github.com/mikepenz/action-junit-report/issues/195#issuecomment-1192407403, with the current (in-progress) PR.

mikepenz avatar Jul 22 '22 10:07 mikepenz