Extract failures from the failing check logs
Failure log parsing doesn't work correctly with the avc check enabled. The beakerlib log detection in the failure() function in results.py searches for
":: FAIL ::", but if the test failed only due to the avc check (i.e. the test itself passed), there won't be any FAIL line. In this situation, the alternate branch for generic log processing is applied. I believe we should improve the beakerlib log detection, e.g. by also searching for other typical beakerlib strings (":: PASS ::", ":: INFO ::" etc.).
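A minimal sketch of the suggested detection, based only on the marker strings mentioned above; the helper name looks_like_beakerlib is hypothetical, not part of tmt:

```python
# Markers typically present in beakerlib logs; searching only for
# ":: FAIL ::" misses tests that passed but failed an avc check.
BEAKERLIB_MARKERS = (':: FAIL ::', ':: PASS ::', ':: INFO ::')


# Hypothetical helper illustrating the suggested detection.
def looks_like_beakerlib(log: str) -> bool:
    return any(marker in log for marker in BEAKERLIB_MARKERS)


# A passing beakerlib log (no FAIL lines) is still detected:
print(looks_like_beakerlib(':: PASS :: Command rm succeeded'))  # True
```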
I did some testing and I see the following issues:
- the plugin attempts to parse a failure/traceback log from the original test log even though the test failed only due to an avc check. This is of course doomed to fail. Unfortunately, the result object doesn't seem to provide a nice way of checking whether the test itself failed or just a check failed.
- dumb beakerlib log detection in the failure() function in results.py. It searches for ":: FAIL ::", but if the test failed only due to the avc check (i.e. the test itself passed), there won't be any FAIL line. In this situation, the alternate branch for generic log processing is applied. I believe we should improve the beakerlib log detection, e.g. by also searching for other typical beakerlib strings (":: PASS ::", ":: INFO ::" etc.) or something even better (the test framework value?).
- AVC failures should be part of the failure/traceback log. I am not sure about other check types. Probably they should implement their own parsing function which would extend the failure log.
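As a workaround sketch for the missing API: the note strings observed on the result object (see the Result dump further down in this issue) can be used to tell the two cases apart. The helper name is hypothetical and the note format is an observed implementation detail, not a stable interface:

```python
# Workaround: tmt's result object does not expose "did the test itself
# fail, or only a check?" directly, so this relies on the observed note
# string 'original test result: pass'.
def only_checks_failed(notes: list[str]) -> bool:
    """True when the test itself passed and the failure comes from a check."""
    return 'original test result: pass' in notes


# Notes taken from the result object shown below in this issue:
notes = ["check 'avc' failed", 'original test result: pass']
print(only_checks_failed(notes))  # True
```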
Below I am pasting code that worked for me; it is definitely not ready for merge and possibly doesn't match the latest tmt code. I just wanted to illustrate what I did.
# diff -u /usr/lib/python3.9/site-packages/tmt/steps/report/reportportal.py.orig /usr/lib/python3.9/site-packages/tmt/steps/report/reportportal.py
--- /usr/lib/python3.9/site-packages/tmt/steps/report/reportportal.py.orig	2025-03-27 08:52:18.000000000 +0100
+++ /usr/lib/python3.9/site-packages/tmt/steps/report/reportportal.py	2025-04-10 16:32:40.811204108 +0200
@@ -481,6 +481,10 @@
         curr_description = self.data.launch_description
         return curr_description
 
+    def _parse_avc(self, log: str) -> list[str]:
+        matches = re.findall(r'(type=(USER_)?AVC.*?)[\n$]', log)
+        return [m[0] for m in matches]
+
     def upload_result_logs(
         self,
         result: tmt.result.BaseResult,
@@ -518,13 +521,22 @@
         # Optionally write out failures only for results which implement the failures callable
         if hasattr(result, "failures") and index == 0 and write_out_failures:
-            message = _filter_log(
-                result.failures(log),
-                settings=LogFilterSettings(
-                    size=self.data.traceback_size_limit,
-                    is_traceback=True,
-                ),
-            )
+            # In case the test failed only due to checks, there is no point parsing the test log
+            message = ''
+            # There seems to be no better way of checking whether to expect failures in the test log
+            if 'original test result: pass' not in result.note:
+                message += _filter_log(
+                    result.failures(log),
+                    settings=LogFilterSettings(
+                        size=self.data.traceback_size_limit,
+                        is_traceback=True,
+                    ),
+                )
+            for check in result.check:
+                if check.name == 'avc' and check.result == ResultOutcome.FAIL:
+                    message += '\n\nAVC messages:\n'
+                    check_log = self.step.plan.execute.read(check.log[0])
+                    message += '\n'.join(self._parse_avc(check_log))
             self.rp_api_post(
                 session=session,
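For illustration, here is a standalone version of the _parse_avc idea that matches line by line instead of using the [\n$] character class (inside a character class, '$' is a literal dollar sign, so a final line without a trailing newline would be missed). The USER_AVC audit record type and the sample log are assumptions:

```python
import re


# Standalone sketch: keep only the AVC denial records (type=AVC or
# type=USER_AVC) from an avc check log. Matching per line avoids the
# [\n$] pitfall described above.
def parse_avc(log: str) -> list[str]:
    return [
        line for line in log.splitlines()
        if re.match(r'type=(USER_)?AVC\b', line)
    ]


# Hypothetical excerpt of an avc.txt check log:
log = (
    'time->Tue Apr 15 06:24:52 2025\n'
    'type=AVC msg=audit(1744697092.100:42): avc:  denied  { read }\n'
    'type=SYSCALL msg=audit(1744697092.100:42): arch=c000003e\n'
)
print(parse_avc(log))
```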
One more thing: even with the change above, subresults reporting won't properly report a test failing on the avc check. Since a check is not associated with any particular phase, all phases are reported as green.
As discussed on the chat, this can significantly slow down test results review in ReportPortal, so I am proposing this with high priority. Let's confirm on the triage meeting.
Result.original_result was observed as always being set to fail, which is unexpected: it should have been pass when only checks failed.
Maybe I am looking in the wrong place, but here is some evidence. Below is a passing test with a failing avc check and the respective result object. I was looking specifically at name='/setup/rhel-buildroot', result=<ResultOutcome.FAIL: 'fail'>, original_result=<ResultOutcome.FAIL: 'fail'>, but maybe I should rather be looking at the individual subresults?
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: Duration: 1s
:: Phases: 1 good, 0 bad
:: OVERALL RESULT: PASS ()
journal.txt: /var/tmp/tmt/run-196/default/plan/execute/data/guest/default-0/setup/rhel-buildroot-1/journal.txt
fail avc (after-test check)
avc.txt: /var/tmp/tmt/run-196/default/plan/execute/data/guest/default-0/setup/rhel-buildroot-1/checks/avc.txt
pass /Test
Test_tmp.wJEDd0XuEK: /var/tmp/tmt/run-196/default/plan/execute/data/guest/default-0/setup/rhel-buildroot-1/data/Test_tmp.wJEDd0XuEK
summary: 1 test failed
total: 1 test failed
report
how: reportportal
suite: /default/plan
test: /setup/rhel-buildroot
Result(name='/setup/rhel-buildroot', result=<ResultOutcome.FAIL: 'fail'>, original_result=<ResultOutcome.FAIL: 'fail'>, note=["check 'avc' failed", 'original test result: pass'], log=[Path('data/guest/default-0/setup/rhel-buildroot-1/output.txt'), Path('data/guest/default-0/setup/rhel-buildroot-1/journal.txt')], start_time='2025-04-15T06:24:47.187858+00:00', end_time='2025-04-15T06:24:48.462196+00:00', duration='00:00:01', serial_number=1, fmf_id=FmfId(fmf_root=None, git_root=None, default_branch=None, url='https://TESTS_REPO/tests.git', ref='main', path=None, name='/setup/rhel-buildroot'), context={}, ids={'id': None, 'extra-nitrate': None, 'extra-task': None}, guest=ResultGuestData(name='default-0', role=None, primary_address='10.0.186.213'), subresult=[SubResult(name='/Test', result=<ResultOutcome.PASS: 'pass'>, original_result=<ResultOutcome.PASS: 'pass'>, note=[], log=[Path('data/guest/default-0/setup/rhel-buildroot-1/data/Test_tmp.wJEDd0XuEK')], start_time=None, end_time='2025-04-15T06:24:48.051963+00:00', duration=None, check=[])], check=[CheckResult(name='avc', result=<ResultOutcome.FAIL: 'fail'>, original_result=<ResultOutcome.FAIL: 'fail'>, note=[], log=[Path('data/guest/default-0/setup/rhel-buildroot-1/checks/avc.txt')], start_time='2025-04-15T06:24:52.012475+00:00', end_time='2025-04-15T06:24:58.168162+00:00', duration='00:00:06', event=<CheckEvent.AFTER_TEST: 'after-test'>)], data_path=Path('data/guest/default-0/setup/rhel-buildroot-1/data'))
url: https://RP_SERVER/ui/#ksrot_personal/launches/all/407ffe73-a421-45fa-80a5-0c00f5fe9503
summary: 1 test failed
Escalated on the stakeholder meeting. We'll try to squeeze this into 1.48.
Related implementation notes from the chat:
proposing to generalize Result.failures() a bit: make it part of other result classes as well, and make it return just the failures rather than the whole log when no failures are found
we should upload all failed checks. Next week we can easily start hearing questions from people using dmesg checks about why the AVC check is so special that it's uploaded and their favorite check isn't. And I don't think AVC is special enough to be the only check uploaded to ReportPortal when it comes to failed tests.
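The generalized idea could look roughly like the sketch below. The Outcome and CheckResult classes are minimal stand-ins for tmt's own types (modeled on the Result dump above), and read() stands in for self.step.plan.execute.read(); none of this is the actual tmt API:

```python
from dataclasses import dataclass, field
from enum import Enum


class Outcome(Enum):
    # Minimal stand-in for tmt.result.ResultOutcome.
    PASS = 'pass'
    FAIL = 'fail'


@dataclass
class CheckResult:
    # Minimal stand-in for tmt's CheckResult, with just the fields used here.
    name: str
    result: Outcome
    log: list = field(default_factory=list)


def collect_failed_check_logs(checks, read) -> str:
    """Append the log of every failed check, not only 'avc';
    'read' stands in for tmt's execute-step log reader."""
    sections = []
    for check in checks:
        if check.result is not Outcome.FAIL:
            continue
        for path in check.log:
            sections.append(f"\n\nCheck '{check.name}' failed:\n{read(path)}")
    return ''.join(sections)


# Example with stub data: only the failed avc check is included.
checks = [
    CheckResult('avc', Outcome.FAIL, ['checks/avc.txt']),
    CheckResult('dmesg', Outcome.PASS, ['checks/dmesg.txt']),
]
print(collect_failed_check_logs(checks, lambda path: f'<contents of {path}>'))
```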