Updated rstrnt-report output
Pull Request Checklist
- [x] implement the feature
- [ ] write the documentation
- [x] extend the test coverage
- [ ] mention the version
- [ ] include a release note
I've implemented the following changes to accommodate RHEL QE's desire for more granular output from tests which utilise the BASH script backwards-compatibility layer for restraint and rhts.
With this implementation, the previous functionality remains as-is for unchanged test.fmf files. That is to say, for each call to the {tmt|rstrnt|rhts}-report-result script, the result supplied as an argument is written to a file, and at the end of the test execution the results are read and an overall result is returned for the test based on the highest value in the hierarchy: skip=1, pass=2, warn=3, fail=4.
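The aggregation described above can be sketched as follows (a minimal, hypothetical illustration; the helper name is mine, not tmt's, and only the severity hierarchy comes from the description above):

```python
# Sketch of the overall-result aggregation described above (hypothetical
# helper; the severity mapping mirrors skip=1, pass=2, warn=3, fail=4).
SEVERITY = {"skip": 1, "pass": 2, "warn": 3, "fail": 4}

def overall_result(reported):
    """Return the reported result with the highest severity."""
    if not reported:
        raise ValueError("no results were reported")
    return max(reported, key=lambda result: SEVERITY[result.lower()])
```

So a test that reports `pass`, `skip`, and `warn` ends up with an overall `warn`.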
For test.fmf files which are updated to include 'result: custom', each call to the {tmt|rstrnt|rhts}-report-result script will be treated as a new test, and output to tmt as such, with the corresponding result supplied to the script when called.
I've updated the existing tests to account for the change from the report-result file to the results.yaml file, and also the properties contained within the results.yaml file.
Tested locally using tmt 1.30.0:

- rhts_custom_results.txt
- rhts_regular_results.txt
- restraint_regular_results.txt
- restraint_custom_results.txt
- tmt_regular_test_pass.txt
- tmt_tests_pass.txt
Submitted to cover tmt #2086
Sorry, closed accidentally. Reopening
@pfdaly thank you for the commit! Do you have an example of results.yaml ?
In the report output it's a bit hard to read sub-test results because there is no hierarchy:

```
skip /test_3/test/skip_2
skip /test_3/test/skip_3
skip /test_4/test/skip_1
pass /test_4/test/skip_2
skip /test_4/test/skip_3
```

Instead, maybe something like the below could be more readable:

```
/test_3
    skip /test/skip_2
    skip /test/skip_3
/test_4
    skip /test/skip_1
    pass /test/skip_2
    skip /test/skip_3
```
Have not seen results.yaml yet. Just a note that the same test hierarchy would be helpful when feeding results to other tools in the pipeline.
> @pfdaly thank you for the commit! Do you have an example of results.yaml?
Indeed, an example of what the file produced by tmt-report-result looks like would be nice. I did miss it while reading the patch, but now that I see it's called results.yaml, I have a couple of questions:
- Is it following the result specification, https://tmt.readthedocs.io/en/stable/spec/plans.html#results-format ?
- If the file is a YAML file, why not read it as such, instead of a crude `for line in ...` & `if "result:" in line`? We could use `yaml_to_dict` and process the content.
- If the file does follow the result specification, I'd suggest not specifying all the fields that are added, namely `fmf_id`, `data-path`, and `log`. tmt should be in a better position to set these fields, e.g. your method of creating `fmf_id` does not account for URL or ref. If tmt does not set them for results loaded from `tmt-report-results`, I'd consider that a bug.
- I noticed several fields are encoded in `note`, namely `$server`, `$port`, `$METRIC`, and `$disablePlugin`, which complicates their use later. One needs to remember the order and syntax of the field, and parse the string. Maybe we could add explicit fields to hold these values.
- Do we even need those fields? The current tmt seems to ignore them completely. If the goal is to provide a restraint-compatible CLI, that does not necessarily mean we should save everything a restraint-compatible test script gives us, e.g. `server` and `port` are a Beaker lab location tmt does not propagate. If the goal is to provide a restraint-compatible file, then the move to YAML would break this goal anyway.
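To illustrate the parsing burden mentioned above, a consumer of the `note` field would need something like the following (a hypothetical sketch; `parse_note_fields` is not part of tmt, only the key names come from the discussion):

```python
import re

# Hypothetical sketch of what a consumer must do to recover values packed
# into the free-form 'note' string (this helper is not part of tmt).
def parse_note_fields(note):
    """Extract key=value pairs such as server=..., port=..., disablePlugin=..."""
    return dict(re.findall(r"(\w+)=(\S+)", note))

fields = parse_note_fields(
    "Example output message. server=http://test-example.com port=55 disablePlugin=avc")
```

With explicit result fields, none of this string parsing would be needed.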
This is the current contents of the results.yaml file for each test. I had followed the result specification. I only included the properties which I thought were relevant, but can certainly trim the unneeded ones.
```
root@fedora:~/2086# cat /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test-1/data/results.yaml
- name: /test/good
  fmf_id:
    name: /test/good
  result: pass
  note:
  log:
  - /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test-1/data/test_good_pass_log
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test-1/data
- name: /test/weird
  fmf_id:
    name: /test/weird
  result: warn
  note:
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test-1/data
- name: /test/bad
  fmf_id:
    name: /test/bad
  result: fail
  note:
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test-1/data
```
```
root@fedora:~/2086# cat /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_2-2/data/results.yaml
- name: /test/good_1
  fmf_id:
    name: /test/good_1
  result: pass
  note: Example output message. server=http://test-example.com port=55 disablePlugin=avc
  log:
  - /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_2-2/data/test_good_1_pass_log
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_2-2/data
- name: /test/good_2
  fmf_id:
    name: /test/good_2
  result: pass
  note:
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_2-2/data
- name: /test/good_3
  fmf_id:
    name: /test/good_3
  result: pass
  note:
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_2-2/data
```
```
root@fedora:~/2086# cat /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_3-3/data/results.yaml
- name: /test/skip_1
  fmf_id:
    name: /test/skip_1
  result: skip
  note:
  log:
  - /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_3-3/data/test_skip_1_skip_log
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_3-3/data
- name: /test/skip_2
  fmf_id:
    name: /test/skip_2
  result: skip
  note:
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_3-3/data
- name: /test/skip_3
  fmf_id:
    name: /test/skip_3
  result: skip
  note:
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_3-3/data
```
```
root@fedora:~/2086# cat /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_4-4/data/results.yaml
- name: /test/skip_1
  fmf_id:
    name: /test/skip_1
  result: skip
  note:
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_4-4/data
- name: /test/skip_2
  fmf_id:
    name: /test/skip_2
  result: pass
  note:
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_4-4/data
- name: /test/skip_3
  fmf_id:
    name: /test/skip_3
  result: skip
  note:
  data-path: /var/tmp/tmt/run-211/plan/execute/data/guest/default-0/test_4-4/data
```
I attempted to replace the crude filtering with a call to `yaml_to_dict` as follows:

```python
result_list = tmt.utils.yaml_to_dict(self.read(report_result_path)).get("result")
```

But I was met with the following error message on execution:

```
fail: Expected dictionary in yaml data, got 'CommentedSeq'.
```

Reading `report_result_path` isn't a problem, but the content is obviously not the expected type. If I use a YAML linting tool to check the results.yaml content I receive no errors. However, on reading the specification I see the following line: 'Results are saved as a single list of dictionaries, each describing a single test result.' Surely that relates to the JSON format and not the YAML format? The examples on https://tmt.readthedocs.io/en/stable/spec/plans.html#results-format don't appear to be formatted any differently from what I've constructed in results.yaml.

'CommentedSeq' is not a type with which I have previous experience.
As for the values I compiled into 'note', I don't believe they're necessary, so if those options and arguments are supplied to the BASH script I could just dispose of them.
> This is the current contents of the results.yaml file for each test. I had followed the result specification.
+1!
> I only included the properties which I thought were relevant, but can certainly trim the unneeded ones.
Yeah, I'd do exactly that, especially when it comes to fields "owned" by the execute step, namely `fmf_id` and `data-path`. tmt can and should fill these in for each and every result it encounters, and can do it better than the shell script.
> I attempted to replace the crude filtering with a call to `yaml_to_dict` as follows:
>
> `result_list = tmt.utils.yaml_to_dict(self.read(report_result_path)).get("result")`
>
> But was met with the following error message on execution:
>
> `fail: Expected dictionary in yaml data, got 'CommentedSeq'.`
>
> Reading `report_result_path` isn't a problem, but it's obviously not the expected type. If I use a YAML linting tool to check the results.yaml content I receive no errors. However, on reading the specification I see the following line, 'Results are saved as a single list of dictionaries, each describing a single test result.', but surely that relates to the JSON format and not the YAML format. The examples on https://tmt.readthedocs.io/en/stable/spec/plans.html#results-format don't appear to be formatted any differently from what I've constructed in results.yaml
That line relates to the YAML form as well (YAML being a superset of JSON, whatever we expect in the JSON file must be true for the YAML one too); it does contain a list of dictionaries.

The error message is correct: the file contains a sequence (`CommentedSeq` is a custom list-like class, it stores comments from the YAML file side by side with the actual data). Try `tmt.utils.yaml_to_list`; `result_list[0].get('result')` should then work.
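Since YAML is a superset of JSON, the top-level structure the parser complains about can be illustrated with the standard library's JSON module (a self-contained sketch; the sample data imitates the results file shown earlier and is not actual tmt output):

```python
import json

# The results file holds a top-level *list* of dictionaries, which is why a
# dictionary-expecting parser fails; index into the list, then look up keys.
# The sample data imitates the results shown earlier; it is not tmt output.
results_text = """
[
    {"name": "/test/good", "result": "pass"},
    {"name": "/test/bad", "result": "fail"}
]
"""

result_list = json.loads(results_text)
first_result = result_list[0].get("result")
```

The same indexing applies to the YAML form once it is read with a list-returning helper.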
> As for the values I compiled into 'note', I don't believe they're necessary, so if those options and arguments are supplied to the BASH script I could just dispose of them.
I would say we don't need them, but I'm no expert. @psss @lukaszachy @thrix do you expect trouble if we don't save server=http://test-example.com port=55 disablePlugin=avc in tmt-report-results file?
> I would say we don't need them, but I'm no expert. @psss @lukaszachy @thrix do you expect trouble if we don't save `server=http://test-example.com port=55 disablePlugin=avc` in the `tmt-report-results` file?
Let's leave them out and, if someone needs them, we will add such functionality. If I read correctly, one can disable a report plugin this way (https://restraint.readthedocs.io/en/latest/commands.html#cmdoption-p)
Apologies for late feedback. This needs documentation change and release note as well, IMO.
Otherwise LGTM.
@juk, could you please review and test the proposed changes to ensure it covers your use case well? Thanks.
> @juk, could you please review and test the proposed changes to ensure it covers your use case well? Thanks.

@psss, I've tried the change with a few kernel tests and I'm getting the same error message when using "result: custom". But I may be missing something.
```
execute
    queued execute task #1: default-0 on default-0
    execute task #1: default-0 on default-0
        how: tmt
        00:00:04 errr /livepatch/selftests/tests/selftests (on default-0) (custom results file not found in '/var/tmp/tmt/run-010/livepatch/selftests/plans/selftests/execute/data/guest/default-0/livepatch/selftests/tests/selftests-1/data') [1/1]
    summary: 1 test executed
report
    how: display
        errr /livepatch/selftests/tests/selftests (custom results file not found in '/var/tmp/tmt/run-010/livepatch/selftests/plans/selftests/execute/data/guest/default-0/livepatch/selftests/tests/selftests-1/data')
    summary: 1 error
finish
    summary: 0 tasks completed

total: 1 error
```
> @juk, could you please review and test the proposed changes to ensure it covers your use case well? Thanks.
>
> @psss, I've tried the change with a few kernel tests and getting the same error message when using "result: custom". But I may be missing something.

It was an issue in my environment. With some modifications I was able to run livepatch/selftests.
@pfdaly thank you so much for working on implementation.
I observed similar behavior with logs as @psss mentioned above.
My test reports 6 logs with "rstrnt-report-result -o". Logs were not captured with both "result: custom" and regular results
Regular results: "report -vv"
```
execute
    queued execute task #1: default-0 on default-0
    execute task #1: default-0 on default-0
        how: tmt
        progress:
    summary: 1 test executed
report
    how: display
        pass /livepatch/selftests/tests/selftests
            output.txt: /var/tmp/tmt/run-021/livepatch/selftests/plans/selftests/execute/data/guest/default-0/livepatch/selftests/tests/selftests-1/output.txt
    summary: 1 test passed
finish
    summary: 0 tasks completed

total: 1 test passed
```
Custom result: "report -vv"
```
execute
    queued execute task #1: default-0 on default-0
    execute task #1: default-0 on default-0
        how: tmt
        progress:
    summary: 6 tests executed
report
    how: display
        pass /livepatch/selftests/tests/selftests/test-callbacks.sh
        pass /livepatch/selftests/tests/selftests/test-ftrace.sh
        pass /livepatch/selftests/tests/selftests/test-livepatch.sh
        pass /livepatch/selftests/tests/selftests/test-shadow-vars.sh
        pass /livepatch/selftests/tests/selftests/test-state.sh
        pass /livepatch/selftests/tests/selftests/test-sysfs.sh
    summary: 6 tests passed
finish
    summary: 0 tasks completed

total: 6 tests passed
```
Example of logs submission in the test:

```
:: [ PASS ] :: Test test-sysfs.sh
+ echo /mnt/testarea/test-sysfs_result.log
/mnt/testarea/test-sysfs_result.log
+ rstrnt-report-result -o /mnt/testarea/test-sysfs_result.log /test-sysfs.sh PASS
+ echo -e '\n::::::::::::::::'
```
@pfdaly with the method proposed, we may be missing a few things in the results.yaml for a test plan. Not sure if it's intended to work this way, though.
Would it be possible to add the below information into the results.yaml per test?
- start time / end time or duration
- fmf id
- logs, including output.txt for the entire test (parent test) (includes output of all subtests/phases)
> @pfdaly with the method proposed, we may be missing a few things in the results.yaml for a test plan. Not sure if it's intended to work this way, though.
Some of it is on purpose: tmt owns some pieces of the puzzle, and is in a better position to add them when results.yaml is eventually submitted to it. From your list, it's namely the fmf ID; tmt-report-results would need to do some non-trivial guessing to get the right answer, while tmt can fill in the blanks with ease. See also https://github.com/teemtee/tmt/pull/2689#issuecomment-1964566852, some of the items were discussed there.
> Would it be possible to add the below information into the results.yaml per test?
>
> - start time / end time or duration
Per sub-test, 100%. For the whole test, tmt should and will overwrite whatever tmt-report-results saves, because tmt measures the duration and can enforce the correctness of these two fields.
> - fmf id
> - logs, including output.txt for the entire test (parent test) (includes output of all subtests/phases)
I mentioned in the comment above that tmt-report-results should not save the log key, but it does make sense for subtest logs. I would try to stay away from the parent test's "main log", aka output.txt, though. Unless you have a really good reason to override it: tmt captures the output of the whole test, saves it, and will add it to the test automatically, again protecting the correctness of the output it observes.
It was decided to push this PR out to 1.34 where we plan to focus on subresults.
I believe I've implemented everything requested in the comments.
The results.yaml file now additionally contains:
- The path to the log file supplied to tmt-report-result
- The data path
- The end time (time at which the tmt-report-result script was called)
There was a request to add the start time, but that's a bit more complex. It could be set to the time when the parent test starts, or the end time of the previous execution could be saved and used as the start time the next time tmt-report-result is called. With the latter, though, there's no guarantee that another sub-test hasn't executed within that period, one which simply doesn't call tmt-report-result.
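The second option above can be sketched as follows (a hypothetical illustration of the chaining scheme, not tmt's implementation; the function name is mine):

```python
from datetime import datetime

# Hypothetical sketch of the chaining scheme: each sub-test's start time is
# the end time of the previous tmt-report-result call (or the parent start).
# Caveat: work done by sub-tests that never call tmt-report-result is
# silently folded into the next reported interval.
def assign_start_times(parent_start, end_times):
    """Return (start, end) pairs for each reported sub-test."""
    intervals = []
    previous_end = parent_start
    for end in end_times:
        intervals.append((previous_end, end))
        previous_end = end
    return intervals
```

This makes the caveat concrete: any gap between two report-result calls is attributed entirely to the second sub-test.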
I've confirmed the following with my testing, which can be viewed in the attached file ( testing_container_regular_tmt-report-result_and_custom_with_display_junit_and_html.txt ):
- The paths to the logs are included in report display
- The log & data paths are correct in report html
- The generated junit.xml for report junit contains an entry for each failure which includes the contents of the log file for each supplied to tmt-report-result.
Wouldn't it make sense to include https://github.com/teemtee/tmt/issues/3041 in this PR? :)
> Wouldn't it make sense to include #3041 in this PR? :)
-1 from me, two different topics, and this PR is already quite large, changing core structures of reporting results, let's not piggyback a minor variable addition on to it.
> Wouldn't it make sense to include #3041 in this PR? :)
>
> -1 from me, two different topics, and this PR is already quite large, changing core structures of reporting results, let's not piggyback a minor variable addition on to it.
ack. Sorry for the noise.
Summary from the hacking session yesterday:
- @pfdaly, would you include a short release note explaining the change / new behavior?
- @jencce, could you please check that the current change covers your use case well?
@juk, fyi, this might be a bit related to #2826, could you please have a quick look?
> Summary from the hacking session yesterday:
>
> * @pfdaly, would you include a short release note explaining the change / new behavior?
> * @jencce, could you please check that the current change covers your use case well?
Yes, I'm testing in https://github.com/teemtee/tmt/issues/2086
/packit build
/packit build
Thanks for the updates, added a couple comments. Also would like to confirm with @juk and @jencce the following questions:
* The proposed change requires that `result: custom` needs to be added to test metadata, is that ok for you that possibly many tests will have to be updated? Can you confirm we don't want to change the default behaviour?
As Karel mentioned in this comment, we can use fmf inheritance here. This is good for me.
* If we require this explicit enablement, shouldn't we rather use `result: restraint` to make this clearly distinct from "real" custom results?
As far as I learned from testing the 2086 branch, this result format can work without restraint.

By "real" custom results, do you mean the beakerlib format?
* I remember @juk mentioning during one of the discussions that it is not desirable to have the individual results reported by `rstrnt-report-result` expanded by default, but this approach means exactly that, e.g. if there are 100 `report-result` commands in the test script you would see them all displayed as 100 regular tests results. Is that ok?
OK for me. Our test suites have hundreds of sub test cases.
* If the log path is not provided, e.g. `tmt-report-result /test/good PASS` is called without the `-o /tmp/good` option, there is no output included in the logs, is that ok or do we want to include the "main" output of the whole script in such cases?
Please include the main output. If there is a typo or bug in the testcase, we can still have something to debug on.
I'm afraid we should clarify these expectations before merging the pull request.
Besides all this, for your reference: with the 2086 branch there is now an issue in reporting directly to ReportPortal, which can be addressed separately.
> If the log path is not provided, e.g. `tmt-report-result /test/good PASS` is called without the `-o /tmp/good` option, there is no output included in the logs, is that ok or do we want to include the "main" output of the whole script in such cases?
>
> Please include the main output. If there is a typo or bug in the testcase, we can still have something to debug on.
Ack. Here's a simple reproducer:
```
/plan:
    provision:
        how: container
    discover:
        how: fmf
    execute:
        how: tmt

/test:
    result: custom
    test: |
        echo ok
        tmt-report-result good PASS
        echo ko
        tmt-report-result bad FAIL
```
In this case no output is included in the display, junit or html report. @pfdaly, could you please look into this?
Fixed error message check in 12337155, everything else green.