
[Custom handlers] Add ability to specify level for each log message

Open anthonychu opened this issue 4 years ago • 10 comments

Currently, custom handler logs are an array of strings. Is it possible to allow setting the level (Error, Warning, etc.) of each log message?

We could accept objects instead of strings, but this would be a breaking change.

Another possibility is to support prefixing the string with the level. Something like:

"Logs": [
    "[WARNING] this is a warning"
]

@pragnagopa @yojagad

anthonychu avatar Sep 06 '20 07:09 anthonychu

Adding support for prefixing would mean doing a string parse on each log to figure out the level, which could have a performance impact.

Another option:

  • Leave Logs as strings - default to info level
  • Introduce objects for logs that carry a log level - this change would be backward compatible (a hypothetical sketch follows)
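
A hypothetical sketch of that mixed payload (the object shape and field names are illustrative, not a settled design):

"Logs": [
    "plain strings stay as-is and default to info",
    { "Level": "Warning", "Message": "this is a warning" }
]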

@anthonychu - Can you please share a design proposal for introducing LogLevel? Also, please assign a priority to this issue. Thanks!

pragnagopa avatar Sep 08 '20 15:09 pragnagopa

I am encountering this as well; I filed an issue at https://github.com/Azure-Samples/functions-custom-handlers/issues/17

Fishrock123 avatar Oct 23 '20 23:10 Fishrock123

This issue is highly relevant.

I recently started experimenting with custom handlers in Go, so I've just come across the issue of handling logs. To me it seems a bit strange to return all the logs in the response to the host. What if the relative timing of the logs is important? This information is lost in the current solution.

A natural solution would allow me to use any logging framework (maybe something as simple as fmt.Println(...)) to send my logs to stdout, and have them appear in Application Insights with sensible values for severity etc. With the current solution, logs sent to stdout do not include the invocation context, so I can't tell which function, or which function invocation, my log comes from.
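
One workaround is to have the handler generate its own correlation ID per invocation and prefix every stdout line with it. A minimal sketch in Go; everything here is the handler's own convention, and the only host contract used is the FUNCTIONS_CUSTOMHANDLER_PORT variable:

package main

import (
    "crypto/rand"
    "encoding/hex"
    "fmt"
    "net/http"
    "os"
)

// newInvocationID makes a random correlation ID; the host does not attach
// any invocation context to stdout, so the handler invents its own.
func newInvocationID() string {
    b := make([]byte, 8)
    rand.Read(b)
    return hex.EncodeToString(b)
}

func handler(w http.ResponseWriter, r *http.Request) {
    id := newInvocationID()
    logf := func(format string, args ...interface{}) {
        fmt.Printf("[%s] "+format+"\n", append([]interface{}{id}, args...)...)
    }
    logf("handling %s %s", r.Method, r.URL.Path)
    // ... actual function work ...
    logf("done")
}

func main() {
    port := os.Getenv("FUNCTIONS_CUSTOMHANDLER_PORT") // set by the Functions host
    if port == "" {
        port = "8080"
    }
    http.HandleFunc("/api/hello", handler)
    http.ListenAndServe(":"+port, nil)
}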

mattias-fjellstrom avatar Feb 09 '21 15:02 mattias-fjellstrom

For my custom functions I just write to stdout and stderr from the worker process and ignore the Logs array in the HTTP response entirely. That gives me two levels of logging (stdout == info, stderr == error). Not ideal, but better than encoding the level into the message.

It also removes the need to encode timestamps into the message, since the host already has callbacks to log stdout and stderr as soon as it gets them.
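
A minimal sketch of that two-level scheme in Go (no timestamps in the messages, since the host stamps them itself):

package main

import (
    "log"
    "os"
)

// The host records anything on stdout as info and anything on stderr as
// error as soon as the lines are written, so two plain loggers give two
// severity levels without touching the Logs array.
var (
    infoLog  = log.New(os.Stdout, "", 0)
    errorLog = log.New(os.Stderr, "", 0)
)

func main() {
    infoLog.Println("recorded at info level")
    errorLog.Println("recorded at error level")
}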

Arnavion avatar Feb 15 '21 19:02 Arnavion

@Arnavion I also went in that direction. I log everything to stdout in a custom JSON format, and then I utilise the parse_json() (reference) function when I query my logs in Application Insights. This allows me to get sensible logs with the levels I want. It is not ideal, and you need to spend a little time setting up the queries (and saving them).

I like how AWS has solved this in CloudWatch with CloudWatch Logs Insights: you just log to stdout in JSON format and, behind the scenes, it is automatically parsed so you can search for any field in your JSON log out of the box.
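
For reference, a minimal sketch of that JSON-to-stdout logging in Go (the field names are only what my saved queries happen to expect; use whatever yours do):

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

// One JSON object per stdout line; parse_json() on the query side can then
// project out the level and any other field.
type logLine struct {
    Time    string `json:"time"`
    Level   string `json:"level"`
    Message string `json:"message"`
}

func logJSON(level, message string) {
    b, _ := json.Marshal(logLine{
        Time:    time.Now().UTC().Format(time.RFC3339),
        Level:   level,
        Message: message,
    })
    fmt.Println(string(b))
}

func main() {
    logJSON("warning", "this is a warning")
}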

mattias-fjellstrom avatar Feb 16 '21 06:02 mattias-fjellstrom

Yes. Azure Functions' "logs in the HTTP response as a JSON array" method (the topic of this issue) is fundamentally a non-starter. Easiest example: any function that hangs and is killed because of a timeout will never get around to writing the HTTP response, so it will lose all its logs, which are precisely the logs you'd want the most. It also requires your programming language / libraries to support writing a custom logger that aggregates logs into an array scoped to the request handler, which is a non-trivial task in and of itself. (Loggers are usually global, and multiple function invocations can run in parallel.)

The only sane logging a custom handler can do is via stdout/err.

So the ideal solution is to parse logs from the worker process as they're generated, in some format that allows at least a log level and ideally full structured logging like .NET Functions have.

Arnavion avatar Feb 16 '21 09:02 Arnavion

Another approach I've been prototyping for my functions is to skip the middleman and log to Log Analytics directly, via https://docs.microsoft.com/en-au/azure/azure-monitor/logs/data-collector-api#sample-requests.

Pros:

  • Full freedom of log levels and structured fields (including the function invocation ID, which is the subject of another related issue in this tracker).

  • Logs appear to show up almost instantaneously after they're reported (apart from a delay whenever there are schema changes), which makes this work well both for local testing with func and for manually triggered functions in the cloud.

Cons:

  • The logger implementation is more complicated: it has to be non-blocking (you don't want to wait on an HTTP request for every log), it has to batch requests (you don't want to make an HTTP request for every individual log), it has to flush before returning the HTTP response to the host (so the process doesn't get killed with logs still buffered), and it now has a failure mode of its own (the request to Log Analytics can itself fail). A rough skeleton of such a logger follows.
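
Roughly this shape, in Go; the actual Data Collector API call (with its shared-key signing) is stubbed out as send, and all names are illustrative:

package main

import (
    "time"
)

type logger struct {
    ch   chan string
    done chan struct{}
}

func newLogger() *logger {
    l := &logger{ch: make(chan string, 1024), done: make(chan struct{})}
    go l.run()
    return l
}

// run drains the channel in the background, shipping a batch when it grows
// large enough or when the ticker fires, so callers never block on HTTP.
func (l *logger) run() {
    var batch []string
    ticker := time.NewTicker(2 * time.Second)
    defer ticker.Stop()
    for {
        select {
        case msg, ok := <-l.ch:
            if !ok {
                send(batch) // final flush before shutdown
                close(l.done)
                return
            }
            batch = append(batch, msg)
            if len(batch) >= 100 {
                send(batch)
                batch = nil
            }
        case <-ticker.C:
            if len(batch) > 0 {
                send(batch)
                batch = nil
            }
        }
    }
}

// Log is non-blocking as long as the channel buffer has room.
func (l *logger) Log(msg string) { l.ch <- msg }

// Flush must complete before the handler returns its HTTP response to the
// host; otherwise the process may be recycled with logs still buffered.
func (l *logger) Flush() {
    close(l.ch)
    <-l.done
}

// send is a stub for the Log Analytics Data Collector API POST; a real
// implementation also has to handle that request failing.
func send(batch []string) { /* HTTP POST elided */ }

func main() {
    l := newLogger()
    l.Log("hello from the worker")
    l.Flush()
}

Closing the channel makes this a one-shot flush, which is fine for a sketch; a real logger would need to stay reusable across invocations.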

Arnavion avatar Feb 27 '21 20:02 Arnavion

I ran into this issue a while ago. I decided to just use the Information and Error levels by printing logs to either stdout or stderr. Then I realized that even when simply printing to stderr, Azure Log Stream seems to be able to "infer" the log level, maybe from the log content.

[screenshot: Azure Log Stream showing two stderr messages assigned different inferred log levels]

Both of these messages were simply printed to stderr. Is there any documentation of this behavior?

Jason5Lee avatar Apr 20 '22 07:04 Jason5Lee

It seems that as long as the message contains "warn" (ignoring capitalization), it will be treated as a warning.

Jason5Lee avatar Apr 20 '22 08:04 Jason5Lee

Hi @pragnagopa / @fabiocav, I am facing the same issue: logs work fine locally in VS Code, but after deployment I cannot see them in the panel. I did try setting RUST_LOG as an environment variable, but still no luck; the logs are not printed. I am using env_logger and log from Rust at the "info" level.

horizondeep avatar Feb 28 '24 05:02 horizondeep