unity-test-runner
Only 65535 characters are allowed
Tests pass, but the GitHub action fails due to too many characters (I'm not sure exactly what it's attempting to do).
I suspect this is due to me running over 1700 tests. Is there some way for me to get the action to pass properly?
Last lines of test run:
```
Processing file playmode-results.xml...
Trying to open artifacts/playmode-results.xml
File artifacts/playmode-results.xml parsed...
Start analyzing results: playmode-results.xml
✅ playmode-results.xml - 1721/1721 - Passed in 1353.219s
=================
Analyze result:
✅ Test Results - 1721/1721 - Passed in 1353.219s
Posting results for cdd86d6db71a1e977b579f30a8c8117bf780e8da
Error: Invalid request.
Only 65535 characters are allowed; 233873 were supplied.
```
[Edit] Forgot to include my yaml:
```yaml
name: Actions

on: [push, pull_request]

jobs:
  test:
    name: Build my project
    runs-on: ubuntu-latest
    steps:
      # Checkout
      - name: Checkout repository
        uses: actions/checkout@v2
      # Test
      - name: Run tests
        uses: game-ci/unity-test-runner@v2
        env:
          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE_2019_4_29F1_UBUNTU }}
        with:
          githubToken: ${{ secrets.GITHUB_TOKEN }}
          projectPath: ProtoPromise_Unity
          unityVersion: 2019.4.29f1
          testMode: PlayMode
      - uses: actions/upload-artifact@v2
        if: always()
        with:
          name: Test results
          path: artifacts
```
That's a lot of tests!
The Unity Test Runner action uses the test results to generate markup and display it using the GitHub Checks API. It looks like the large number of tests you have is causing too large a request to be sent to the Checks API, which rejects it. I did find this related issue: https://github.community/t/undocumented-65535-character-limit-on-requests/117564
The easiest workaround would be to remove `githubToken: ${{ secrets.GITHUB_TOKEN }}`, so that the Checks API is not called. But then you wouldn't be able to see the test results in the UI. You could upload the test results artifact as mentioned here in the GameCI docs, or you could try using a different action to display the results (though I expect they will have the same error):
```yaml
- uses: MirrorNG/[email protected]
  if: always()
  with:
    path: ${{ steps.test.outputs.artifactsPath }}/*.xml
    access-token: ${{ secrets.GITHUB_TOKEN }}
```
Alternatively, maybe there's a way to split the results into smaller pieces, so that multiple smaller calls could be made to the Checks API?
I already have the store artifacts step after the test run. I'm actually not too concerned about the UI for the tests; I just want it to report success/fail, and then I can manually inspect the test results artifact if needed. Is there a configuration option for that?
[Edit] I just realized that's exactly what removing the GitHub token does, *facepalm*. But yeah, it would be nice to be able to have the UI work for a large number of tests.
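For reference, a minimal sketch of that workaround: the same test step as above but with `githubToken` omitted, so the Checks API is never called. The job still fails if any test fails, and the raw XML stays available through the artifact upload.

```yaml
# Sketch: run tests without posting results to the Checks API.
# Omitting githubToken skips the check-run creation entirely.
- name: Run tests
  uses: game-ci/unity-test-runner@v2
  env:
    UNITY_LICENSE: ${{ secrets.UNITY_LICENSE_2019_4_29F1_UBUNTU }}
  with:
    projectPath: ProtoPromise_Unity
    unityVersion: 2019.4.29f1
    testMode: PlayMode
# Keep the artifact upload so results can be inspected manually.
- uses: actions/upload-artifact@v2
  if: always()
  with:
    name: Test results
    path: artifacts
```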
Would it be possible to parse the results and only upload the failed tests? Typically those are the only ones I care about, and failures should typically be very few to not go past that limit.
I think most people like seeing the green checkmarks for each passing test.
But we could potentially add an option (maybe `checkFailedOnly`?) that could be set to true to filter the results.
Feel free to raise a PR if you think this is something you could add.
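If such an option were added, usage might look something like this (`checkFailedOnly` is hypothetical — it does not exist in the action):

```yaml
- name: Run tests
  uses: game-ci/unity-test-runner@v2
  with:
    githubToken: ${{ secrets.GITHUB_TOKEN }}
    projectPath: ProtoPromise_Unity
    unityVersion: 2019.4.29f1
    testMode: PlayMode
    checkFailedOnly: true  # hypothetical: only post failed tests to the Checks API
```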
I'd love to, but I'm not too familiar with js or github actions, so I wouldn't know where to start. I also don't have that much time right now, but maybe I could dig into it in a few weeks or so if no-one else does by then.
I actually found https://github.com/marketplace/actions/test-reporter to solve this issue. It doesn't natively support the XML results that Unity outputs yet, but that's apparently in the works, and a workaround was posted by a user. So I will close this.
Should this bug/feature request be reopened? I suspect that supporting test suites with more tests / more test output would be a good feature for many people!
The workaround using test-reporter ~~does not work for me, but fails with a `stackTrace.split is not a function` error (as it does for people in that thread).~~
That workaround does work if setting reporter: java-junit (rather than jest-junit). Can't believe I read over that / sorry for any confusion!
Sure!
I created an enhancement request with what I think may be the best solution to this issue: https://github.com/game-ci/unity-test-runner/issues/169
> Should this bug/feature request be reopened? I suspect that supporting test suites with more tests / more test output would be a good feature for many people!
>
> The workaround using test-reporter does not work for me, but fails with a `stackTrace.split is not a function` error (as it does for people in that thread).

Changing `jest-junit` to `java-junit` works for me.
```yaml
# Workaround for NUnit XML (see https://github.com/dorny/test-reporter/issues/98#issuecomment-867106931)
- name: Install NUnit
  if: always()
  run: |
    nuget install NUnit.Console -Version 3.12.0
- name: Fetch transform code
  if: always()
  run: |
    wget https://raw.githubusercontent.com/nunit/nunit-transforms/master/nunit3-junit/nunit3-junit.xslt
  shell: bash
- name: Transform NUnit3 to JUnit
  if: always()
  run: |
    Get-ChildItem . -Filter artifacts/*.xml | Foreach-Object {
      $xml = Resolve-Path $_.FullName
      $output = Join-Path ($pwd) ($_.BaseName + '_junit.xml')
      $xslt = New-Object System.Xml.Xsl.XslCompiledTransform;
      $xslt.Load("nunit3-junit.xslt");
      $xslt.Transform($xml, $output);
    }
  shell: pwsh
- uses: dorny/test-reporter@v1
  if: always()
  with:
    name: "unity-test-results-${{ matrix.buildTarget }}-${{ matrix.scriptingRuntime }}-${{ matrix.mode }}-${{ matrix.progress }}-${{ matrix.pooling }}"
    path: "*_junit.xml"
    reporter: java-junit
- uses: actions/upload-artifact@v2
  if: always()
  with:
    name: unity-test-results-${{ matrix.buildTarget }}-${{ matrix.scriptingRuntime }}-${{ matrix.mode }}-${{ matrix.progress }}-${{ matrix.pooling }}
    path: artifacts
```
Though, like someone else mentioned, it does lack the stack traces, but I can get that from the artifacts when I need it.
I like the idea of having different options to display the test results, so maybe we could even make a new compound action that does the NUnit-to-JUnit transformation and then uses dorny/test-reporter to display the test results?
It would mean extra work + maintenance, so I'm not going to push for the idea, but I thought I'd at least suggest the possibility.
It would be better if dorny/test-reporter just supported NUnit XML natively, but it seems dorny never got around to it. I posted in that issue asking about it. I think PRs will also be accepted if anyone wants to add the support there if dorny doesn't have the time.
Or if Unity's test runner could offer extra test result format options, but that's outside the purview of this project (though someone did ask about it on the forums https://forum.unity.com/threads/unity-test-framework-2-0-ready-for-feedback.1230126/#post-7884664).
For me, skipped/inconclusive tests were simply counted as passed. As a very ugly, hacky workaround, I added the following to my yml right after the `Transform NUnit3 to JUnit` step:
```yaml
- name: Add "skipped" tags.
  run: |
    sed -i "s/\(<testcase.* status=\".nconclusive\".*>\)/\1 <skipped\/>/g" *_junit.xml
  shell: bash
```
edit: sorry if this is out of scope for this discussion!
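To illustrate what that `sed` substitution does, here is a standalone sketch on a single hypothetical `<testcase>` line (the element name and attribute values are made up for the example):

```shell
# A made-up one-line JUnit <testcase> entry, with the NUnit
# "Inconclusive" status carried over as an attribute.
line='<testcase name="MyTest" status="Inconclusive" time="0.01">'

# Same substitution as in the workflow step above: append a <skipped/>
# marker after any testcase whose status is Inconclusive/inconclusive.
echo "$line" | sed "s/\(<testcase.* status=\".nconclusive\".*>\)/\1 <skipped\/>/g"
# → <testcase name="MyTest" status="Inconclusive" time="0.01"> <skipped/>
```

The `.nconclusive` in the pattern matches both `Inconclusive` and `inconclusive`, so either capitalization gets tagged.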
Instead of dorny/test-reporter, using EnricoMi/publish-unit-test-result-action worked better for me. Usage is very similar.
It looks like that only officially supports JUnit XML, not NUnit XML. Do you have to convert it?
Yes, you also have to convert it. I used your code above. https://github.com/ReactUnity/core/blob/main/.github/workflows/test.yml#L108
Hm, looks like EnricoMi/publish-unit-test-result-action added NUnit XML support back in June. I'll have to try it out. It may be worth integrating that.
[Edit] Looks like it only works on Linux, not Windows.
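A rough sketch of what that might look like with native NUnit support, so no XSLT transform step would be needed. The input names here are assumptions from memory — check the action's README for the exact spelling in the release you use (some releases split inputs by format, e.g. `nunit_files`):

```yaml
# Sketch only (Linux runners, per the note above).
- uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    files: "artifacts/*.xml"  # NUnit XML straight from unity-test-runner
```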
@timcassell thanks for the info. I could remove some boilerplate from my workflow file.