
Failure stdout to show where failure happened

mkorpela opened this issue 5 years ago · 6 comments

WHY Currently the normal Robot Framework command line output does not show where a failure happened. It only shows the failed test case name and the failure message.

Especially when executing tests on a continuous integration server or service, such as GH Actions, Travis CI or CircleCI, the first thing I look at is the terminal output of the specific task that failed. Showing directly where the failure happened would speed up debugging and make it more likely that easy-to-fix bugs get fixed quickly.

Logs are generated only at the end of execution and by default also require a manual download. This is why I believe they should mostly be used in more demanding debugging situations, or when the failure report is far removed in time from when the failure gets fixed.

pytest has a very nice and informative way of showing failure output, and it often points directly at the root cause of the problem, for instance a change in input data.

WHAT Let's show where a failure happens directly in the command line output.

HOW I propose that the failure message that gets printed to stdout should have the following components:

  1. The failure message from the error, as it is already shown
  2. The file and line number of the failing step
  3. The step as it is written in the file (possibly truncated if too long)
  4. The step arguments resolved to values (possibly truncated if too long)

A proof of concept can be found at https://github.com/robotframework/robotframework/commit/49dede55654d390bd40fd9e98716330e09ffe0ba

In my opinion, optimal output would look something like the example below; a listener-based sketch of the idea follows it:

==============================================================================
Test
==============================================================================
Test.04 Frames
==============================================================================
Test.04 Frames.Deep Frames
==============================================================================
First level                                                           | PASS |
------------------------------------------------------------------------------
Second level                                                          | PASS |
------------------------------------------------------------------------------
Third level                                                           | FAIL |
> test/resources/frame_stuff.robot:15
> Get Text    ${selector}    ==    This is ${part}
E Get Text    id=b >>> id=c >>> id=cc    ==    This is g
E Property innerText `This is c` should be `This is g`
------------------------------------------------------------------------------
Third level from second                                               | PASS |
------------------------------------------------------------------------------
Test.04 Frames.Deep Frames                                            | FAIL |
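
Without changes to the core, components 1-3 above can already be approximated with a listener. Below is a minimal sketch, assuming Robot Framework 4.0 or newer, where listener v2 keyword attributes include source and lineno. Note that a listener only sees the arguments as they are written in the data, so component 4 (arguments resolved to values) would still need core support. The class and file names are just examples.

import sys

class FailureLocation:
    # Print the location of the innermost failing keyword to the console.
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self._reported = False

    def start_test(self, name, attributes):
        self._reported = False

    def end_keyword(self, name, attributes):
        # The innermost failing keyword ends first; its enclosing keywords
        # also end with status FAIL, so report only the first failure per test.
        if attributes['status'] == 'FAIL' and not self._reported:
            self._reported = True
            out = sys.__stderr__  # original stderr, bypasses Robot's output capture
            out.write('> %s:%s\n' % (attributes.get('source'), attributes.get('lineno')))
            out.write('> %s    %s\n' % (attributes['kwname'], '    '.join(attributes['args'])))

Taken into use with, for example, robot --listener FailureLocation.py tests/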

mkorpela · Sep 14 '20 14:09

I don't like having the detail in-line with all of the other output. I've always felt the output is too verbose, and this just makes it worse. Would it make sense to add a new section to the output, after the last test but before the summary? For example, it might look something like this (a listener sketch producing a similar section follows the example):

==============================================================================
Example                                                                       
==============================================================================
Example 1                                                             | PASS |
------------------------------------------------------------------------------
Example 2                                                             | FAIL |
abc != ABC
------------------------------------------------------------------------------
Example 3                                                             | PASS |
------------------------------------------------------------------------------
Example 4                                                             | FAIL |
Expected page to contain "welcome, user"
------------------------------------------------------------------------------
Example                                                               | FAIL |
==============================================================================
Error details:
- Example 2: tests/example2.robot:6
> Should be equal  abc  ABC
E abc != ABC
- Example 4: tests/example.robot:12
> Page should contain  welcome, user
E Expected page to contain "welcome, user"
==============================================================================
4 critical tests, 2 passed, 2 failed
4 tests total, 2 passed, 2 failed
==============================================================================
Output:  /private/tmp/output.xml
Log:     /private/tmp/log.html
Report:  /private/tmp/report.html
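
The "Error details" section could be prototyped the same way, by collecting failures while tests run and printing them from the listener's close() method, which Robot calls once the whole execution has ended (so the exact position relative to Robot's own summary lines is not guaranteed). A minimal sketch, again assuming the RF 4.0+ source and lineno attributes; for tests they point at the test definition rather than the failing step, which would need keyword-level tracking as in the earlier sketch:

import sys

class ErrorDetails:
    # Collect test failures during execution and print an 'Error details'
    # block at the very end of the run.
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self._failures = []

    def end_test(self, name, attributes):
        if attributes['status'] == 'FAIL':
            self._failures.append((attributes['longname'],
                                   attributes.get('source'),
                                   attributes.get('lineno'),
                                   attributes['message']))

    def close(self):
        if not self._failures:
            return
        out = sys.__stderr__  # original stderr, bypasses Robot's output capture
        out.write('=' * 78 + '\n' + 'Error details:\n')
        for longname, source, lineno, message in self._failures:
            out.write('- %s: %s:%s\n' % (longname, source, lineno))
            out.write('E %s\n' % message)
        out.write('=' * 78 + '\n')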

boakley · Sep 14 '20 14:09

I like your example output better, Bryan, but I think that time plays an important role here. The command line output is already available before test execution is over; once it is over, the log is also available. Giving the location in the output at the time the failure happens offers a possible speed-up in the testing and fixing iteration.

mkorpela · Sep 14 '20 15:09

Although I agree there is too much clutter when we are displaying at INFO level, I would put forth that it is better to have the (simplified) output at the "point of need", that is, up in the test case. That way the information is together with the other test case output where, for example, the error is displayed.

emanlove · Sep 14 '20 15:09

Another example of how it could look with a --stacktrace option; a listener sketch approximating this follows the example:

==============================================================================
01 CarConfig                                                                  
==============================================================================
Configure Car with Pass                                               
Traceback (most recent call last):
  File  "/Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/01_CarConfig.robot:27", in "01 CarConfig"
    Configure Car with Pass    
  File  "/Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/01_CarConfig.robot:9", in "Configure Car with Pass"
    functional_keywords.Open CarConfig As Admin    
  File  "/Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/functional_keywords.resource:69", in "functional_keywords.Open CarConfig As Admin"
    technical_keywords.Set Password    @RBTFRMWRK@
  File  "/Source/RF-Schulung/02_RobotFiles/TestCases/14_Browser/technical_keywords.resource:66", in "technical_keywords.Set Password"
    Browser.Fill Secret    id=input_password_NOT_FOUND    ${password}
Configure Car with Pass                                               | FAIL |
Setup failed:
Could not find element with selector `id=input_password_NOT_FOUND` within timeout.
------------------------------------------------------------------------------
01 CarConfig                                                          | FAIL |
1 test, 0 passed, 1 failed
==============================================================================

A question to discuss: should the paths be relative to ${EXECDIR}, or should they be absolute?
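
A traceback view like the one above can also be prototyped with a listener that keeps a stack of the keyword calls currently being executed and dumps it on the first failure. A sketch under the same assumptions as the earlier ones (RF 4.0+ source and lineno, which point at the call site and are absolute paths, so making them relative to ${EXECDIR} would be a separate formatting step); the --stacktrace option itself is only a proposal here:

import sys

class KeywordStacktrace:
    # Dump the chain of currently executing keyword calls, traceback-style,
    # when the first failure in a test occurs.
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self._stack = []
        self._dumped = False

    def start_test(self, name, attributes):
        self._stack, self._dumped = [], False

    def start_keyword(self, name, attributes):
        # source/lineno refer to the place where the keyword is used.
        self._stack.append((attributes.get('source'), attributes.get('lineno'),
                            name, attributes['args']))

    def end_keyword(self, name, attributes):
        if attributes['status'] == 'FAIL' and not self._dumped:
            self._dumped = True
            out = sys.__stderr__  # original stderr, bypasses Robot's output capture
            out.write('Traceback (most recent call last):\n')
            for source, lineno, kwname, args in self._stack:
                out.write('  File  "%s:%s"\n' % (source, lineno))
                out.write('    %s    %s\n' % (kwname, '    '.join(args)))
        self._stack.pop()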

Snooz82 · Sep 16 '20 15:09

About exception representation: https://www.diva-portal.org/smash/get/diva2:1052658/FULLTEXT01.pdf (might be useful, but not yet verified).

mkorpela · Sep 24 '20 07:09

This is very much needed, at least when you have written your own stuff in Python and it fails somewhere and you have absolutely no idea where it happened. Something like IndexError: list index out of range isn't very informative on its own.

otsoni · Sep 07 '23 10:09