semantic-conventions
Add node.js and v8 js engine runtime metrics semantic conventions
Fixes #990
Changes
Adding semantic conventions for:
- Node.js runtime metrics
- V8 Js Engine metrics
Merge requirement checklist
- [x] CONTRIBUTING.md guidelines followed.
- [x] Change log entry added, according to the guidelines in When to add a changelog entry.
  - If your PR does not need a change log, start the PR title with `[chore]`
- [ ] schema-next.yaml updated with changes to existing conventions.
I wasn't sure if I needed to update anything in schema-next.yaml, since there were no changes to existing semantic conventions. This is a new one.
cc @open-telemetry/javascript-approvers
I am wondering whether or not it should be called NodeJS. What if someone uses Deno or Bun?
> I am wondering whether or not it should be called NodeJS. What if someone uses Deno or Bun?

what about just `js`?
> I am wondering whether or not it should be called NodeJS. What if someone uses Deno or Bun?

> what about just `js`?

I mean I guess, but it also feels like it can be confused with the language instead of the runtime environment
> I mean I guess, but it also feels like it can be confused with the language instead of the runtime environment

if we want to go by definition, it would be `runtimejs`
> I mean I guess, but it also feels like it can be confused with the language instead of the runtime environment

> if we want to go by definition, it would be `runtimejs`

`jsruntime` sounds maybe better? idk, might need some extra opinions
In my opinion, different runtimes need to have different prefixes. For example: node → `nodejs`, bun → `bun`, deno → `denojs`, etc. Or some attribute that specifies which runtime is currently running. Or a metric, idk.
Also, I think for the version we need to expose a metric like the prom-client library does. An attribute carrying the version on every metric is, in my opinion, not a good solution.
> I mean I guess, but it also feels like it can be confused with the language instead of the runtime environment

> if we want to go by definition, it would be `runtimejs`

> `jsruntime` sounds maybe better? idk, might need some extra opinions

I think we should make it clear that it is only for NodeJS, not other runtimes like Deno, Bun, or something else. JS is just a language, but the underlying runtimes differ a lot. I would argue for something along the lines of `nodejs`, `noderuntime`, or `nodejsruntime`.
> I mean I guess, but it also feels like it can be confused with the language instead of the runtime environment

> if we want to go by definition, it would be `runtimejs`

> `jsruntime` sounds maybe better? idk, might need some extra opinions

> I think we should make it clear that it is only for NodeJS, not other runtimes like Deno, Bun, or something else. JS is just a language, but the underlying runtimes differ a lot. I would argue for something along the lines of `nodejs`, `noderuntime`, or `nodejsruntime`.

It may be worth splitting the v8 semconv from the NodeJS semconv. The v8 engine is used by a few different JS runtimes, and it would be worth making them consistent between runtimes. Most of the metrics returned by the `v8` module come directly from the v8 engine, such as `getHeapStatistics`. The node metrics returned by the `perf_hooks` module, however, are specific to NodeJS.
thank you for all the comments, I'm looking and will be replying to them 👀
New updates:
- Metrics separated into two groups: Node.js metrics (containing only the Node.js-specific metrics) and V8 JS engine metrics (which can be used by other JS runtimes, such as Deno)
- Replaced metrics where I was using type attributes to define the type; they are now separate metrics (for example, heap size and event loop delay)
Following the discussion in the SIG meeting, this PR is now kept just for Node, and I created a new one for the V8 metrics: https://github.com/open-telemetry/semantic-conventions/pull/1066
One last question from my side. I've noticed that other time-related metrics use histogram as the instrument. Some examples:
- https://github.com/open-telemetry/semantic-conventions/blob/cde003cd371ef8cc802f275bcfed15cff8fd6a48/model/metrics/jvm-metrics.yaml#L49
- https://github.com/open-telemetry/semantic-conventions/blob/cde003cd371ef8cc802f275bcfed15cff8fd6a48/model/metrics/go-metrics.yaml#L77
Here, all the metrics related to delay that use `s` as the unit use the gauge instrument. Probably a naive question, but why is that?
> Here, all the metrics related to delay that use `s` as the unit use the gauge instrument. Probably a naive question, but why is that?

That is a limitation of this type of metric for Node: I'm using gauge here more because the data comes from the event loop, not because it is a duration (I even use histogram for durations in other metrics). The gist is that the values for min, max, p50, p90, p99 for these metrics come from the built-in histogram in the Node runtime, and it is not possible to convert it to an OTel histogram, because we can only query the distribution, not get the entire histogram object. This is why you will notice I have a separate metric for each value (min, max, etc.) instead of just one called delay that would contain everything.
Hope that helps clarify!
> Here, all the metrics related to delay that use `s` as the unit use the gauge instrument. Probably a naive question, but why is that?

> That is a limitation of this type of metric for Node: I'm using gauge here more because the data comes from the event loop, not because it is a duration (I even use histogram for durations in other metrics). The gist is that the values for min, max, p50, p90, p99 for these metrics come from the built-in histogram in the Node runtime, and it is not possible to convert it to an OTel histogram, because we can only query the distribution, not get the entire histogram object. This is why you will notice I have a separate metric for each value (min, max, etc.) instead of just one called delay that would contain everything.

> Hope that helps clarify!

It helps a lot. Thank you! :)
> That is a limitation of this type of metric for Node: I'm using gauge here more because the data comes from the event loop, not because it is a duration (I even use histogram for durations in other metrics). The gist is that the values for min, max, p50, p90, p99 for these metrics come from the built-in histogram in the Node runtime, and it is not possible to convert it to an OTel histogram, because we can only query the distribution, not get the entire histogram object. This is why you will notice I have a separate metric for each value (min, max, etc.) instead of just one called delay that would contain everything.

Could you please add a similar note to the md file itself?
@open-telemetry/javascript-approvers any last minute feedback?
> Could you please add a similar note to the md file itself?

Done