Criterion
Support writing unique test log files given format string in CRITERION_OUTPUTS
Hi,
(I tried searching to see if this was reported before, but could not find anything, so please refer me to the appropriate issue if it already exists.)
I'm trying to use Criterion as a unit test framework together with the meson build system. I would like to get JUnit XML output from the tests when running in CI and feed those reports to a JUnit parser/report step. As far as I can see, my only current option is to add --xml=<testname>.xml to my test() calls in meson.
What I'd like instead is to set CRITERION_OUTPUTS=xml:TEST-%s.xml when running in CI and let Criterion replace %s with the test suite name, or a similarly unique name per test executable. I know the cmocka framework supports something like this: https://api.cmocka.org/index.html.
Interesting, I'd have thought that most test result parsing tools would be able to slice off individual tests from the aggregated XML.
Also, just to be clear, this is for integrating with some CI test reporter that knows how to read JUnit XML files, rather than integrating with meson itself, right? The latter can be achieved with --tap and adding protocol: 'tap' to test(), but I think you're looking for the former.
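For the meson-native route, the TAP wiring mentioned above would look roughly like this (a sketch; the target and source names are made up for illustration):

```meson
# Illustrative target; a real project would list its own sources.
example_test = executable(
  'example_test',
  'test/src/example_test.c',
  dependencies: dependency('criterion'),
)

# Criterion emits TAP when passed --tap, and protocol: 'tap' tells
# meson to parse the test's output as TAP instead of relying on the
# exit code alone.
test('example_test', example_test, args: ['--tap'], protocol: 'tap')
```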
In any case, you're right that outputting one XML per test isn't currently supported.
I created a simple example project, just so we have a common reference to discuss: https://github.com/thomasstenersen/criterion-xml-example. I'm using Criterion v2.4.1, by the way.
Also, just to be clear, this is for integrating with some CI test reporter that knows how to read JUnit XML files, rather than integrating with meson itself, right?
Yes. I've found that the XML output from meson doesn't conform well to what the parsers expect. Compare the XML from meson:
<?xml version="1.0" encoding="utf-8"?>
<testsuites tests="1" errors="0" failures="1">
  <testsuite name="criterion-xml-example" tests="1" errors="0" failures="1" skipped="0" time="0.006365776062011719">
    <testcase name="Example test" classname="criterion-xml-example" time="0.006365776062011719">
      <failure/>
      <system-err>[----] ../test/src/example_test.c:6: Assertion Failed
[----]
[----] The expression (2) == (4) is false.
[----]
[FAIL] example_test::test_fail: (0,00s)
[====] Synthesis: Tested: 1 | Passing: 0 | Failing: 1 | Crashing: 0</system-err>
    </testcase>
  </testsuite>
</testsuites>
With the XML from Criterion:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Tests compiled with Criterion v2.4.1 -->
<testsuites name="Criterion Tests" tests="1" failures="1" errors="0" disabled="0">
  <testsuite name="example_test" tests="1" failures="1" errors="0" disabled="0" skipped="0" time="0.000">
    <testcase name="test_fail" assertions="1" status="FAILED" time="0.000">
      <failure type="assert" message="1 assertion(s) failed."><![CDATA[../test/src/example_test.c:6: The expression (2) == (4) is false.]]>
      </failure>
    </testcase>
  </testsuite>
</testsuites>
So my use case is more that I have a bunch of test()s defined that I'd like to run in a CI environment, collecting the XML results to be picked up by some parser/reporter tool, e.g. https://github.com/EnricoMi/publish-unit-test-result-action or similar. I'm not really interested in the XML files when running locally. It's not a big issue, though; I worked around it by doing something similar to
test_exes = [
  executable(...),
  executable(...),
]

foreach test_exe : test_exes
  test(test_exe.name(), test_exe, args: ['--xml=@0@.xml'.format(test_exe.name())])
endforeach
I may have misunderstood something, but I think the --output/CRITERION_OUTPUTS flag set to a directory could help here (#259):
--output=PROVIDER:PATH: Write a test report to PATH using the output
provider named by PROVIDER.
If PATH is an existing directory, the report will be created in that
directory. The file will be named after the binary.
For example: test_foo --> test_foo.xml
If the file already exists, Criterion will pick a different name
(test_foo_1.xml, test_foo_2.xml) to avoid overwriting it.
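If that directory behavior works as described, the per-binary file naming falls out automatically, and the meson side simplifies to pointing the environment variable at a directory. A minimal sketch, assuming a test_exes list like the one above and using the current build directory as the report location (both are my assumptions, not something confirmed in this thread):

```meson
# Sketch: hand Criterion a directory via CRITERION_OUTPUTS. Since the
# path is an existing directory, each test binary should write
# <binary>.xml into it, per the behavior described in #259.
foreach test_exe : test_exes
  test(
    test_exe.name(),
    test_exe,
    env: {'CRITERION_OUTPUTS': 'xml:' + meson.current_build_dir()},
  )
endforeach
```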
Oh, I completely forgot that feature. I'd expect this would work, then?
Yeah, this seems to do the trick :+1: Great stuff. Though I could not see this in the documentation anywhere; am I looking in the wrong places?
https://criterion.readthedocs.io/en/v2.4.1/env.html#command-line-arguments
$ ./test/example_test --help
Tests compiled with Criterion v2.4.1
usage: ./test/example_test OPTIONS
options:
-h or --help: prints this message
-q or --quiet: disables all logging
-v or --version: prints the version of criterion these tests have been linked against
-l or --list: prints all the tests in a list
-jN or --jobs N: use N concurrent jobs
-f or --fail-fast: exit after the first failure
--color=<auto|always|never>: colorize the output
--encoding=<ENCODING>: use the specified encoding for the output (default: locale-deduced)
--ascii: don't use fancy unicode symbols or colors in the output
-S or --short-filename: only display the base name of the source file on a failure
--filter [PATTERN]: run tests matching the given pattern
--timeout [TIMEOUT]: set a timeout (in seconds) for all tests
--tap[=FILE]: writes TAP report in FILE (no file or "-" means stderr)
--xml[=FILE]: writes XML report in FILE (no file or "-" means stderr)
--json[=FILE]: writes JSON report in FILE (no file or "-" means stderr)
--always-succeed: always exit with 0
--verbose[=level]: sets verbosity to level (1 by default)
--crash: crash failing assertions rather than aborting (for debugging purposes)
--debug[=TYPE]: run tests with a debugging server, listening on localhost:1234 by default. TYPE may be gdb, lldb, or wingbd.
--debug-transport=VAL: the transport to use by the debugging server. `tcp:1234` by default
--full-stats: Tests must fully report statistics (causes massive slowdown for large number of assertions but is more accurate).
--ignore-warnings: Ignore warnings, do not exit with a non-zero exit status.
-OP:F or --output=PROVIDER=FILE: write test report to FILE using the specified provider
You're right, the documentation of this feature is completely missing. I will make up for it soon.
No worries, nice feature anyway. @Snaipe, I'm fine with closing the issue unless you'd like to keep it open until docs are added.
I'm fine with keeping it open to track the documentation bug.