Support stack trace logging with locals
I'm quite fond of how Python's logging framework can be configured to your liking. One of the things I like to do is add a custom formatter, often to the root file/console logger, that formats exceptions differently than the default. I do this primarily to enable capturing local variables, as I find that can greatly reduce debugging time.
An example of such a stack trace:
```
$ robot --loglevel TRACE:INFO --outputdir ./output --xunit junit.xml --debugfile debug.txt --exitonerror --nostatusrc -s Experiments test_data
==============================================================================
Test Data
==============================================================================
Test Data.Experiments
==============================================================================
[ ERROR ] Caught exception!
Traceback (most recent call last):
  File "....../Utilities.py", line 56, in raise_an_exception
    raise Exception("An exception.")
    a_local = True
    b_local = 420.69
    log = <Logger opso.Utilities (NOTSET)>
Exception: An exception.
Logging test F
```
Unfortunately, the way Robot Framework handles its interactions with the logging framework makes it quite difficult to do such things cleanly, especially if I want the result to appear like that in the Robot log.
My workaround so far is to put this code in a utilities library that is imported as a listener or keyword library:
```python
import logging
import traceback


class FormatterWithLocals(logging.Formatter):
    def formatException(self, ei) -> str:
        # noinspection PyBroadException
        try:
            tbe = traceback.TracebackException(*ei, capture_locals=True)
            s = "".join(tbe.format())
        except Exception:
            logging.exception("Exception trying to format a stacktrace with captured locals, trying again without.")
            tbe = traceback.TracebackException(*ei, capture_locals=False)
            s = "".join(tbe.format())
        return s.removesuffix("\n")


# fixme: Access to internals :(
logging._defaultFormatter = FormatterWithLocals()
```
This access to `_defaultFormatter` is required because in `robot/output/pyloggingconf.py` (via the function `robot_handler_enabled`) the root logger gets a new handler appended to it without any way to configure it. This handler then ends up using the default formatter.
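For illustration, the `capture_locals` behaviour itself is plain stdlib and can be tried outside Robot entirely. This standalone sketch (function and variable names made up for the example) produces output in the same shape as the trace shown above:

```python
import sys
import traceback


def raise_an_exception():
    # These locals end up in the formatted traceback.
    a_local = True
    b_local = 420.69
    raise Exception("An exception.")


try:
    raise_an_exception()
except Exception:
    # capture_locals=True records repr() of every frame's locals.
    tbe = traceback.TracebackException(*sys.exc_info(), capture_locals=True)
    formatted = "".join(tbe.format())

print(formatted)
```

Each frame in the output is followed by indented `name = repr(value)` lines for its locals, which is exactly what makes the failing state visible without a debugger.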
A downside of this workaround (aside from the obviously bad practice of accessing privates) is that it only applies after Robot has imported the relevant library. A way to make this happen as early as possible is to put this code in a pre-run modifier.
A few other possible workarounds:
- Since the handler is first added in Robot's main and subsequent calls to `robot_handler_enabled` are no-ops, you could wait until you're inside the run context and modify the handler by calling its `setFormatter` function. This does not require private accesses, but does run the risk of changes in Robot's internal behaviour silently breaking the change.
- Preempt `robot_handler_enabled` by already adding a `RobotHandler` to the root logger (perhaps via a logging config file) that is configured as you want it. This is perhaps the cleanest way to handle this, but also requires that the `robot_handler_enabled` behaviour stays the same. I don't know if this behaves nicely with what Robot expects, given that you'd be adding log messages before that would normally happen. This also requires a hook somewhere before the normal call; the most obvious candidate is a pre-run modifier.
- Preempt Robot's main entirely and set up logging before even invoking it, maybe even monkeypatching `RobotHandler` or `robot_handler_enabled`. This is probably what we'll end up doing long term, as it has benefits for our use case not relevant to this discussion.
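The first option can be sketched with public `logging` API only. `install_formatter` is a hypothetical helper name; since Robot offers no public way to identify its own handler, this deliberately crude version touches every handler on the root logger:

```python
import logging


def install_formatter(formatter: logging.Formatter) -> None:
    """Replace the formatter on every handler currently attached to the
    root logger. Intended to be called from inside the run context
    (e.g. a listener's start_suite), i.e. after Robot has added its handler."""
    for handler in logging.getLogger().handlers:
        handler.setFormatter(formatter)
```

This avoids `_defaultFormatter`, but still depends on Robot's handler being attached to the root logger, which is internal behaviour.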
I don't like any of these solutions. I would prefer to see Robot support a way to set up the logging configuration via a more standardized mechanism (`logging.config`). I'm making this issue as a feature request, but I don't know exactly what the feature would look like. The logging in Robot is already quite complex, so I wouldn't want to overload it even further.
Better configuration of how Robot uses logging would be good. One concrete problem we have faced is that some tools (e.g. Selenium, ping @emanlove) log a lot on the debug level and it would be convenient to be able to filter those messages out altogether without affecting other debug messages.
I personally don't know logging well enough to propose a good solution. I can study this at some point, but I doubt I have time in the near future. To get something done soon would thus require someone else studying this and proposing a solution.
I would like to hear more about your ideas around logging, @dries007. There are a few ways we can prototype and demonstrate ideas.
And as Pekka mentioned, we have some needs/desires to do some filtering and/or reduction of debug logging with SeleniumLibrary.
I'll put a ticket in our backlog to do some prototyping/investigation, but no guarantees about when/if it gets picked up, as I can only spend so much time on this. If there is an agreed approach, I can do the same for implementing a feature and creating a PR; we have no issues contributing to open source work, especially for something we use a lot :smile:. (Though I want to avoid doing implementation work with no hope of upstreaming.)
Off the top of my head:
With `logging` you can filter log levels per logger instance per handler via filters. So the `RobotHandler` could be selectively deaf to messages from some logger names. For example, SeleniumLibrary could "overwrite" the default to be INFO for the `RobotHandler` (which ends up in the HTML), WARNING for the console logger (stdout/stderr), and DEBUG for debugging.txt.
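A minimal sketch of such a per-handler filter (class and logger names are made up for illustration; this is plain stdlib, not Robot-specific):

```python
import logging


class MinLevelFor(logging.Filter):
    """Drop records from logger `name` (and its children) below `level`,
    but only on whichever handler this filter is attached to."""

    def __init__(self, name: str, level: int):
        super().__init__()
        self._name = name
        self._level = level

    def filter(self, record: logging.LogRecord) -> bool:
        if record.name == self._name or record.name.startswith(self._name + "."):
            return record.levelno >= self._level
        return True


# Attach to a single handler: this handler ignores SeleniumLibrary
# DEBUG messages, while any other handlers still receive them.
handler = logging.StreamHandler()
handler.addFilter(MinLevelFor("SeleniumLibrary", logging.INFO))
```

Because filters live on the handler rather than the logger, each output (HTML log, console, debug file) can apply a different threshold to the same logger.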
Outside of our Robot runtime I have the same issue with some libraries (for example gRPC). Setting all loggers to DEBUG causes a massive avalanche of debug output. I solve this with this config file:
```ini
# hpack is a logger that spews DEBUG, qtf is our tool, root is the root logger.
[loggers]
keys=root,qtf,hpack

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_qtf]
level=DEBUG
qualname=qtf
handlers=

# Limit hpack (used by GRPC) to INFO to prevent 10+ lines of spam every single RPC call.
[logger_hpack]
level=INFO
qualname=hpack
handlers=

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s [%(levelname)s] %(name)s: %(message)s
```
Somewhere very early, this happens:
```python
import logging.config
import os

if os.path.isfile("logging.conf"):
    logging.config.fileConfig("logging.conf")
    logging.info("Loaded logging.conf.")
```
Today I would maybe opt for a different file format (probably TOML) and load it as a dict, but this was quick at the time.
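For reference, roughly the same configuration expressed as a dict (which could be loaded from a TOML file) and passed to `logging.config.dictConfig` might look like this; a sketch reusing the names from the INI above:

```python
import logging.config

CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simpleFormatter": {
            "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s",
        },
    },
    "handlers": {
        "consoleHandler": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "simpleFormatter",
            # "ext://" tells dictConfig to resolve an external object.
            "stream": "ext://sys.stdout",
        },
    },
    "loggers": {
        # Limit hpack (used by gRPC) to INFO to prevent debug spam.
        "hpack": {"level": "INFO"},
    },
    "root": {"level": "DEBUG", "handlers": ["consoleHandler"]},
}

logging.config.dictConfig(CONFIG)
```

One advantage of the dict schema: unlike `fileConfig`, `dictConfig` also accepts a `filters` section, so per-handler filtering could live in the same configuration file.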
I don't know if this file format supports filters; that might have to be an in-code-only thing.
@dries007 I also worked around finer control over library logging by plugging in a listener. It works, but there is a big "but": it's not really guaranteed to work long term because, as you yourself pointed out, it does not use any specified/documented behaviour or a public API.
If RF had a set of clearly defined loggers and handlers, with a clear plug-in point in the execution lifecycle where application-level logging configuration would not be changed, this would be more stable, but still not really a nice solution to work with.
I very much agree that an explicit file-based logging configuration, passed (under the hood) directly to `fileConfig`, would be the best long-term solution.