smokescreen
fix: timer metrics unit
Histograms are built using the default bucket definitions:
If Buckets is left as nil or set to a slice of length zero, it is replaced by default buckets. The default buckets are DefBuckets if no buckets for a native histogram (see below) are used, otherwise the default is no buckets.
And DefBuckets is:
var DefBuckets = []float64{.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10}
DefBuckets are the default Histogram buckets. The default buckets are tailored to broadly measure the response time (in seconds) of a network service.
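As a minimal sketch (not smokescreen's actual code; the metric name is hypothetical) of how a client_golang histogram falls back to DefBuckets when no buckets are configured, and why the unit of the observed value matters:

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Buckets is left nil, so NewHistogram falls back to prometheus.DefBuckets
	// (.005, .01, ..., 10), which assume observations expressed in seconds.
	h := prometheus.NewHistogram(prometheus.HistogramOpts{
		Name: "dns_resolution_duration_seconds", // hypothetical metric name
		Help: "Time taken to resolve a hostname, in seconds.",
	})

	h.Observe(0.007) // a 7 ms lookup recorded in seconds falls into the .01 bucket
	h.Observe(7)     // the same lookup recorded as raw milliseconds falls into the 10 bucket
}
```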
The timers were measuring values in milliseconds, whereas the underlying buckets are configured to store values in seconds.
I spotted the misconfiguration by checking the DNS resolution timers on our deployments: most observations fall between 5 and 10 ms, so they land in the last two buckets, while the initial buckets are almost unused.
Changing to seconds instead of milliseconds would, in our case, populate the first two or three buckets and represent the higher latencies (if any) more accurately.
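A rough sketch of what the change amounts to (the timer variable, metric name, and resolve helper are hypothetical illustrations, not smokescreen's actual code): observe the elapsed time in seconds instead of milliseconds.

```go
package main

import (
	"net"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// dnsResolutionTimer is a hypothetical stand-in for the affected timer metric.
var dnsResolutionTimer = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name: "resolver_dns_resolution_duration_seconds",
	Help: "Time taken to resolve a hostname, in seconds.",
})

func resolveWithTimer(host string) ([]net.IP, error) {
	start := time.Now()
	ips, err := net.LookupIP(host)

	// Before: float64(time.Since(start).Milliseconds()) -- a 7 ms lookup is
	// recorded as 7 and lands in the 5/10 buckets of DefBuckets.
	// After: Seconds() records the same lookup as 0.007, which falls into
	// the .01 bucket as intended.
	dnsResolutionTimer.Observe(time.Since(start).Seconds())

	return ips, err
}

func main() {
	_, _ = resolveWithTimer("example.com")
}
```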
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.
I signed the CLA :)