Log levels should be aligned with syslog
We are seeing a bunch of logs from connect-api that indicate success, but the log level seems off. An example log:
{
  "log_message": "(I) GET /v1/vaults/XXX/items/XXX completed (200: OK)",
  "timestamp": "2022-07-31T11:05:55.785430724Z",
  "level": 3,
  "scope": {
    "request_id": "XXX",
    "jti": "XXX"
  }
}
The level says 3, which would map to the syslog level Error, but the (I) prefix in the message indicates an info log. This is causing most of the logs from connect-api to show up as Errors in our log stack, even though they appear to be info-level messages.
- What is the correct log level for this?
- Could you use syslog levels for the level field?
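For reference, the standard syslog severity scale (RFC 5424), which most log backends assume when they see a numeric level field, is:
0 - emergency
1 - alert
2 - critical
3 - error
4 - warning
5 - notice
6 - informational
7 - debug
So any backend that reads the field as a syslog severity will treat a value of 3 as Error.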
Hey PhillippBs,
I have added this issue to our internal tracking for investigation. Has there been a change in the logging levels you've seen in Connect, or is this only about the syslog/Connect log level mismatch?
It's true that the Connect levels don't match the syslog levels. Currently, Connect uses:
1 - error
2 - warn
3 - info
4 - debug
5 - trace
So in this case, 3 is the correct value for an info-level log message. The logging level can be set with OP_LOG_LEVEL and defaults to info.
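For anyone deploying Connect to Kubernetes with Terraform, this is roughly where OP_LOG_LEVEL goes. This is only a minimal sketch assuming the Terraform Kubernetes provider; the resource names, labels, and image tag are illustrative, and the credentials/sync-container wiring a real Connect deployment needs is omitted.

resource "kubernetes_deployment" "onepassword_connect" {
  metadata {
    name = "onepassword-connect"
  }
  spec {
    selector {
      match_labels = {
        app = "onepassword-connect"
      }
    }
    template {
      metadata {
        labels = {
          app = "onepassword-connect"
        }
      }
      spec {
        container {
          name = "connect-api"
          # Illustrative image tag; pin whatever version you actually run.
          image = "1password/connect-api:latest"
          # Level names per the mapping above (error/warn/info/debug/trace); defaults to info.
          env {
            name  = "OP_LOG_LEVEL"
            value = "debug"
          }
        }
      }
    }
  }
}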
Hey @kpcraig,
Thanks for replying so quickly. We have just recently adopted 1Password as a secret management solution inside our k8s cluster and noticed that the log levels don't match most of the common standards.
In general, I think it would be great if the log levels were aligned with the syslog levels. This would let all standard log backends ingest them correctly.
Wanted to register my support for aligning the log levels with syslog levels. The mismatch is currently causing our log aggregator (Datadog) to mark all INFO logs from 1Password Connect as error logs, which makes it difficult for us to set up alerting and instrumentation around error logs in our other applications.
I encountered this issue specifically with Datadog categorizing all logs as ERROR. @ThePletch or anyone else using Datadog, this is how I was able to work around the issue by adding a Datadog Log Pipeline.
Here's what my logs_custom_pipeline Terraform looks like:
resource "datadog_logs_custom_pipeline" "connect_pipeline" {
filter {
query = "source:op-connect-api"
}
name = "connect-pipeline"
is_enabled = true
processor {
category_processor {
target = "level"
category {
name = "error"
filter {
query = "@level: \"1\""
}
}
category {
name = "warn"
filter {
query = "@level: \"2\""
}
}
category {
name = "info"
filter {
query = "@level: \"3\""
}
}
category {
name = "debug"
filter {
query = "@level: \"4\""
}
}
category {
name = "trace"
filter {
query = "@level: \"5\""
}
}
name = "connect-api category processor"
is_enabled = true
}
}
processor {
status_remapper {
sources = ["level"]
name = "level status remapper"
is_enabled = true
}
}
}
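A note on the design: the category processor first rewrites Connect's numeric level into a name ("error", "warn", "info", ...), and the status remapper then reads that name to set the official log status. As far as I can tell, feeding the raw numbers straight into the status remapper would have Datadog interpret them on the syslog scale, which is exactly the mismatch this thread is about. The pipeline also assumes your Connect logs arrive tagged with source:op-connect-api; if your agent tags them differently, adjust the top-level filter query accordingly.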