timestamp is in a weird format
What is the issue?
Hey there 👋🏼,
We noticed that the linkerd-proxy container produces odd values in the timestamp field. It is not a timestamp in the usual sense; it is more like the elapsed time since the container started. See the example log below.
The current behaviour is misleading and confusing, especially when you use a central logging system where this field is expected to be a date/Unix timestamp 😢.
How can it be reproduced?
kubectl logs -n <namespace> <some_app_with_linkerd-proxy> -c linkerd-proxy
Logs, error output, etc
An example log line from linkerd-proxy:
{"timestamp":"[ 10132.943961s]","level":"INFO","fields":{"message":"Continuing after timeout: linkerd_proxy_http::version::Version protocol detection timed out after 10s"},"target":"linkerd_detect","spans":[{"name":"outbound"},{"addr":"10.163.70.5:9000","name":"proxy"}],"threadId":"ThreadId(2)"}
Please take a look at the value of the timestamp. Not only is it not a date/timestamp, it is also, for some reason, wrapped in brackets like an array 🤦🏼.
output of linkerd check -o short
N/A
Environment
K8s Version 1.24.7, running on AWS EKS. Release 2.210.0 (85db2fc) by linkerd on 2023-09-21T21:24:58Z
Possible solution
Change the timestamp value to something more common, for example the ISO 8601 format, like 2024-04-25T08:43:41Z.
This format is already used by the linkerd-init container.
Additional context
We deploy Linkerd via the Helm chart. We use the JSON logging format.
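For context, this is roughly how we enable JSON proxy logging; a minimal sketch, assuming the proxy.logFormat value in the Linkerd Helm chart (it can also be set per workload with the config.linkerd.io/proxy-log-format annotation):

# values.yaml (sketch) - switch proxy logs from the default "plain" to JSON
proxy:
  logFormat: json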
Would you like to work on fixing this bug?
no
Hey @Rohlik, so sorry for the delay here! Technically, the timestamp logged is the number of seconds since the start of the proxy, both because that's faster to generate and because, for many proxy issues, the time-since-start is actually more useful than anything else.
This is not uncommon in Kubernetes logs, and in fact kubectl logs has the --timestamps flag for exactly this purpose: it adds ISO8601 timestamps to each line. Setting up your aggregator to use that is likely the easiest path forward.
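For example, reusing the reproduction command from above:

kubectl logs -n <namespace> <some_app_with_linkerd-proxy> -c linkerd-proxy --timestamps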
Thank you for your time and your reply @kflynn. I appreciate the background you provided.
The biggest issue with using --timestamps is that the output looks like the example below, which is a mixed-format 😢 log.
Many logging tools, like Fluentd, will then produce errors like "message is not valid JSON".
2024-05-04T07:05:11.571258714Z {"timestamp":"[950763.773856s]","level":"INFO","fields":{"message":"Connection closed","error":"direct connections must be mutually authenticated","client.addr":"10.10.10.10:55668"},"target":"linkerd_app_core::serve","spans":[{"name":"inbound"}],"threadId":"ThreadId(3)"}
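To make that parseable we would need something like the sketch below on the aggregator side; this is only an illustration of the extra work involved, assuming Fluentd's parser filter and the container line arriving in a record field named log:

# Fluentd sketch (assumptions: parser filter plugin, record field "log")
<filter kubernetes.**>
  @type parser
  key_name log
  reserve_data true
  <parse>
    @type regexp
    # split the kubectl-added timestamp prefix from the JSON payload
    expression /^(?<container_time>\S+)\s+(?<json_payload>\{.*\})$/
  </parse>
</filter>
<filter kubernetes.**>
  @type parser
  key_name json_payload
  <parse>
    @type json
  </parse>
</filter>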
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
Bump
As of this week's edge release (to be released on Thursday), JSON-formatted proxy logs will include proper ISO 8601 timestamps.