
Add Java benchmarks page

martinkuba opened this issue 2 years ago

The Java SDK has microbenchmark tests that are automatically run on every push to main. The results are currently published at open-telemetry.github.io/opentelemetry-java/benchmarks.

This PR adds a page here with the same results, similar to what has already been done for the JavaScript benchmarks (https://opentelemetry.io/docs/instrumentation/js/benchmarks/) and the Collector load tests (https://opentelemetry.io/docs/collector/benchmarks/).

Related to https://github.com/open-telemetry/opentelemetry.io/pull/3342.

I have also moved the CSS that is shared among the three benchmarks pages to a separate CSS file.


Previews:

  • https://deploy-preview-3352--opentelemetry.netlify.app/docs/instrumentation/js/benchmarks/
  • https://deploy-preview-3352--opentelemetry.netlify.app/docs/collector/benchmarks/

martinkuba avatar Oct 05 '23 21:10 martinkuba

Could you move the factoring out of the styles (and the adjustment to the JS benchmarks page) to a separate PR? That could land sooner. Thanks!

@chalin Opened a new PR https://github.com/open-telemetry/opentelemetry.io/pull/3482

martinkuba avatar Nov 02 '23 23:11 martinkuba

Can this PR be merged? Is there anything pending?

svrnm avatar Dec 07 '23 12:12 svrnm

@martinkuba - pls rebase and resolve conflicts. Thx

chalin avatar Jan 11 '24 13:01 chalin

FYI, also see the following related issue, which we should address separately from this PR:

  • #3760

chalin avatar Jan 11 '24 13:01 chalin

@svrnm @chalin I think before we put more effort into this particular branch, we need to get clarification on whether it will be merged. Jack has concerns about publishing these results and submitted a blocking review.

tylerbenson avatar Jan 12 '24 19:01 tylerbenson

@tylerbenson @jack-berg is this still blocked?

svrnm avatar Apr 11 '24 08:04 svrnm

Looks like it. I think we'd close it and reopen if there is news.

theletterf avatar May 03 '24 06:05 theletterf

I would still like to see this merged. @jack-berg do you still oppose having this data on the main site? The set of benchmarks was chosen to focus on the span lifecycle. I didn't just pick random benchmarks.

If you would like to have a discussion about including different benchmarks, let me know and I can bring it up in the Java SIG meeting.

tylerbenson avatar May 03 '24 14:05 tylerbenson

If you would like to have a discussion about including different benchmarks, let me know and I can bring it up in the Java SIG meeting.

Let's talk about it in the SIG. We're likely going to need to write new benchmarks from scratch. We need benchmarks which:

  • Reflect the workflows we expect from users
  • Are high-level enough that we can describe them clearly and a casual user won't misinterpret them
  • Exist for each of the signals with standardization across signals where it makes sense

The benchmarks which are published today are a strange selection to present to users:

  • MultiSpanExporterBenchmark: Tests sending a predefined collection of spans to a set of noop exporters via MultiSpanExporter. This essentially just tests how fast we can iterate through the exporters and aggregate their CompletableResultCodes. No OTLP or log or zipkin exporters are exercised. Why would a user care about this?
  • FillSpanBenchmark: Starts a span and adds 4 attributes to it. A weird thing to watch over time, as it's very unlikely to change.
  • SpanBenchmark: Starts a span, adds an event, and ends it. Tests on 1, 2, 5, and 10 threads. This overlaps a bit with FillSpanBenchmark, but both are oddly specific and leave out lots of common span API surface area.
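For readers unfamiliar with the suite: the span-lifecycle pattern that SpanBenchmark and FillSpanBenchmark exercise can be sketched as below. This is a minimal, dependency-free approximation, not the actual benchmark code; the real suite uses JMH against the OpenTelemetry SDK, and the FakeSpan class here is a hypothetical stand-in so the sketch runs without either dependency.

```java
import java.util.ArrayList;
import java.util.List;

public class SpanLifecycleSketch {
    // Hypothetical stand-in for an SDK span; the real benchmarks
    // use io.opentelemetry.api.trace.Span from the Java SDK.
    static final class FakeSpan {
        final List<String> events = new ArrayList<>();
        void addEvent(String name) { events.add(name); }
        void end() { /* the SDK records the end timestamp here */ }
    }

    // One benchmark iteration: start a span, add an event, end it --
    // the same lifecycle SpanBenchmark measures under JMH.
    static long runIterations(int n) {
        long sink = 0; // consumed result, so the loop body is not optimized away
        for (int i = 0; i < n; i++) {
            FakeSpan span = new FakeSpan(); // "start" the span
            span.addEvent("work-item");
            span.end();
            sink += span.events.size();
        }
        return sink;
    }

    public static void main(String[] args) {
        int warmup = 10_000, measured = 100_000;
        runIterations(warmup); // JMH runs warmup iterations for the same reason
        long start = System.nanoTime();
        long sink = runIterations(measured);
        double opsPerSec = measured / ((System.nanoTime() - start) / 1e9);
        System.out.println("ops/sec ~ " + (long) opsPerSec + " (sink=" + sink + ")");
    }
}
```

JMH would additionally handle forking, dead-code elimination via Blackhole, and multi-threaded runs, which a hand-rolled loop like this cannot do reliably.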

jack-berg avatar May 03 '24 22:05 jack-berg

I will close this PR now since there has been no activity for a few months. Please re-open it or raise a new one when this is ready to be added to the docs.

svrnm avatar Aug 01 '24 09:08 svrnm