
Enhancing benchmark engine

Open alexveden opened this issue 9 months ago • 1 comment

Recently I was doing some benchmarking and found some rough edges that could be polished; some of these might be my misunderstanding, though.

  • [ ] project.json doesn't support finding benchmarks in the test section, i.e. test-sources; benchmarks are only picked up when placed in one of the sources directories. I think it would be cleaner to keep benchmarks near the test suite code (see the project.json sketch after this list).
  • [ ] The only way I found to prime a benchmark with real data (e.g. read from a file) is an @initialize function, which runs at program startup (this is how the benchmark example works). However, that @initialize function from the benchmark code was also compiled into the main project as regular source, and into the test runner too. Would it be possible to isolate benchmark code from the main project code?
  • [ ] Do we need per-case setup/teardown for benchmarks, or a global init function specific to a bench suite?
  • [ ] c3c has both bench and benchmark commands, and there is also a bench target type, which is confusing. I think we need more clarity there, maybe with better docs. I tried to add a benchmark or bench target to project.json but failed.
  • [ ] Do we need benchmark filters like the ones in the test runner? It would be nice to have them.
  • [ ] More of a cosmetic request: it would probably be better to display the number of steps and to state explicitly that the timing number is the average per function call.
  • [ ] The current timing method is quite coarse because it is prone to OS scheduler pauses. For example, between benchmark function runs the OS might deschedule the process for a long time (hundreds of milliseconds), which skews the averaged timings. There are techniques to mitigate this (see the timing sketch after this list). One might also want cold-cache benchmarks, which are useful for understanding how code could perform in different environments.
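
For illustration, here is roughly the layout I have in mind for the first item. This is a hedged sketch of the requested behavior, not current c3c semantics: it assumes test-sources (or some future benchmark-sources key) would also contribute benchmark functions, and the benchmarks/** path is just an example:

```json
{
  "sources": [ "src/**" ],
  "test-sources": [ "test/**", "benchmarks/**" ]
}
```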
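And as a sketch of one mitigation technique for the timing issue (minimum-of-batches timing): run the body in batches and report the smallest batch average, since a batch hit by a scheduler pause produces an inflated average that the minimum discards. now_ns() and body_under_test() below are hypothetical placeholders, not the runner's actual API:

```c3
// Sketch: keep the minimum batch average as a noise-resistant estimate.
fn ulong min_avg_ns(ulong batches, ulong iters)
{
    ulong best = ulong.max;
    for (ulong b = 0; b < batches; b++)
    {
        ulong start = now_ns();                 // hypothetical monotonic clock
        for (ulong i = 0; i < iters; i++) body_under_test();
        ulong avg = (now_ns() - start) / iters;
        if (avg < best) best = avg;             // pause-inflated batches are discarded
    }
    return best; // best-case average nanoseconds per call
}
```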

Let's discuss how to deal with the above.

alexveden avatar Mar 01 '25 08:03 alexveden

  1. It seems wrong for tests and benchmarks to be in the same folder, but benchmarks could certainly have their own folder.
  2. You can guard an initializer with @if(env::BENCHMARKING). It will then only be compiled when benchmarks are run (see the sketch after this list).
  3. I am not sure; I didn't implement the first benchmark runner myself, and it's fairly low priority for me.
  4. The bench target type and the bench/benchmark commands are indeed confusing. There is a similar problem with tests; see #1651.
  5. Benchmark filters are indeed being requested.
  6. It could indeed be improved.
  7. Benchmarking correctly is indeed difficult. I have pondered removing the benchmarking feature entirely.
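
A minimal sketch of point 2. It assumes the startup attribute is spelled @init (the issue text calls it @initialize) and that benchmark functions carry @benchmark; load_data() and parse() are placeholders:

```c3
module my_bench;

char[] data @if(env::BENCHMARKING);

// Only compiled, and run at startup, when building the benchmark runner,
// so it never leaks into the main project build or the test runner.
fn void prime_data() @init @if(env::BENCHMARKING)
{
    data = load_data(); // placeholder: read the real data set from a file
}

fn void bench_parse() @benchmark
{
    parse(data); // placeholder for the code under benchmark
}
```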

lerno avatar Mar 05 '25 13:03 lerno