testsPublishResults parameter failOnError: true does not break the pipeline on test script errors
Hi Colleagues,
We are running Postman collections on Jenkins using the newmanExecute and testsPublishResults Piper steps. Our config:
newmanExecute(
    script: this,
    newmanCollection: "tests/postman/SpecificationManagement/**/*collection.json",
    failOnError: false,
    runOptions: [
        "run", "{{.NewmanCollection}}",
        "--environment", "{{.Config.NewmanEnvironment}}",
        "--env-var", "auth_baseurl=${auth_url}",
        "--env-var", "srv_baseurl=${spec_srv_url}",
        "--env-var", "smServiceUrl=${status_srv_url}",
        "--env-var", "configurator=${configurator}",
        "--env-var", "configuratorpwd=${configuratorpwd}",
        "--env-var", "editor=${editor}",
        "--env-var", "editorpwd=${editorpwd}",
        "--env-var", "viewer=${viewer}",
        "--env-var", "viewerpwd=${viewerpwd}",
        "--env-var", "business_service_credentialid=${specmgmt_cred_id}",
        "--env-var", "business_service_credentialsecret=${specmgmt_cred_secret}",
        "--env-var", "saas_app_xsuaa_credentialid=${saas_app_xsuaa_id}",
        "--env-var", "saas_app_xsuaa_credentialsecret=${saas_app_xsuaa_secret}",
        "--env-var", "smClientId=${statusm_cred_id}",
        "--env-var", "smClientSecret=${statusm_cred_secret}",
        "--reporters", "junit,html",
        "--reporter-junit-export", "target/newman/{{.CollectionDisplayName}}-report.xml",
        "--reporter-html-export", "target/newman/{{.CollectionDisplayName}}-report.html",
        "--insecure", "--bail", "--delay-request", "1000"
    ]
)
testsPublishResults script: this, junit: [pattern: "**/newman/*.xml", updateResults: true, archive: true], failOnError: true
The test script of one of our Postman requests has been throwing an error, but we hadn't noticed until now. The error is present in the generated .html and .xml report files, but it does not show up on the Jenkins UI, so tracking it down is cumbersome compared to the clarity with which Jenkins displays ordinary (non-error) test case failures.
/target/newman/tests_postman_SpecificationManagement_ConfigurePropertyTreeService-report.xml
<testsuites name="ConfigurePropertyTreeService" tests="30" time="13.392">
  <testsuite name="Service call to delete PropertyTree_Configurator" id="cf6af549-2617-4d4c-af31-4c652b122138" timestamp="2022-10-03T12:41:06.379Z" tests="2" failures="0" errors="0" time="0.069">
    <testcase name="Status code is 200" time="0.069" classname="ConfigurePropertyTreeService"/>
    <testcase name="HTTP/1.1 204 No Content is present in response" time="0.069" classname="ConfigurePropertyTreeService"/>
  </testsuite>
  <testsuite name="Service call to Read Obsolete PropertyTree" id="99d970b2-d5b1-4473-a76d-1d080afc20e7" timestamp="2022-10-03T12:41:06.379Z" tests="1" failures="0" errors="1" time="0.048">
    <system-err><![CDATA[ Iteration: 0 --- testScriptError: Cannot read properties of undefined (reading 'status_code') --- ]]></system-err>
    <testcase name="Status code is 200" time="0.048" classname="ConfigurePropertyTreeService"/>
  </testsuite>
</testsuites>
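For comparison, my (hedged) understanding from reading around the JUnit plugin is that Jenkins derives its counts from per-testcase <failure>/<error> child elements and ignores the suite-level errors attribute, so the error above would presumably only register on the UI if newman's reporter attached it to the testcase itself, along these lines (a hypothetical report for illustration, not what newman actually emits):

<testcase name="Status code is 200" time="0.048" classname="ConfigurePropertyTreeService">
  <error message="Cannot read properties of undefined (reading 'status_code')"/>
</testcase>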
When running with newmanExecute failOnError: true, the pipeline run is interrupted after that step as expected; however, we then don't get any reports at all, so finding the errors is (again) more difficult than it needs to be.
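A workaround we could live with would be to keep failOnError: true on newmanExecute but still publish whatever reports were written before the step failed, e.g. by wrapping the calls (a minimal sketch of the idea; the abbreviated arguments stand for the full config above):

try {
    newmanExecute(script: this, failOnError: true /* plus the remaining arguments from our config above */)
} finally {
    // runs even when newmanExecute throws, so the partial reports still land on the Jenkins UI
    testsPublishResults script: this,
        junit: [pattern: "**/newman/*.xml", updateResults: true, archive: true],
        failOnError: true
}

This would still not flag the testScriptError itself, though, which brings me to the actual question.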
I've tried searching through the Piper documentation, but I wasn't able to find a configuration option more relevant than the ones we are already using. Is there something I am missing?
I've also tried reverse engineering what exactly is going on, and found this PR introducing the flag, which relies on the implementation of hudson.tasks.junit.TestResultAction.getFailCount() documented here and implemented somewhere around here. I couldn't figure out the exact path this code would take in my case.
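To spell out my reading (a paraphrase of what I think happens, not the actual Piper source): the failOnError check seems to effectively reduce to something like the sketch below, and since getFailCount() only counts testcases carrying <failure>/<error> child elements, the suite-level errors="1" from newman's reporter never increments it:

// Paraphrased sketch of my understanding; requires script security approval
// for the rawBuild access and is NOT the actual implementation.
def action = currentBuild.rawBuild.getAction(hudson.tasks.junit.TestResultAction)
if (action != null && action.failCount > 0) {
    error "Build failed due to test results"  // never reached for our testScriptError
}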
From an API design perspective, I would expect the two failOnError flags of newmanExecute and testsPublishResults to behave consistently. Also, from a semantic perspective, I would expect a flag named failOnError to break my build on any test problem it finds, not just failures but errors as well. Was this the original intent of the parameter, @CCFenner?
If I'm right and this is a problem in general, we should strongly consider looking into it, as builds that should have been broken may have passed unnoticed. People probably don't expect to have to look beyond the Jenkins UI for test failures, let alone check the result .xml files for errors.
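In the meantime, a stop-gap along these lines might at least surface such cases (a minimal sketch; assumes the Pipeline Utility Steps plugin is available for findFiles):

// Stop-gap sketch: fail explicitly whenever a newman JUnit report contains a
// testScriptError, independent of the JUnit fail count.
def reports = findFiles(glob: "**/newman/*.xml")
def broken = reports.findAll { readFile(it.path).contains("testScriptError") }
if (broken) {
    error "testScriptError found in: ${broken*.path.join(', ')}"
}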
Looking forward to any feedback.