Enhancing benchmark engine
Recently I was doing some benchmarking and found some rough edges that could be polished; some of these might be my own misunderstandings, though.
- [ ] `project.json` doesn't support finding benchmarks in the test section, i.e. `test-sources`; benchmarks are only picked up when placed in one of the `sources` directories. I think it would be cleaner to keep benchmarks near the test suite code.
- [ ] I found that the only way to prime a benchmark with real data (e.g. read from a file) is an `@initialize` function, which works as a program startup function (this is how the benchmark example works). However, this `@initialize` function from the benchmark code is also compiled into the main project as a regular source file, and into the test runner too. Would it be possible to isolate benchmark code from the main project code? (A sketch of this pattern follows the list.)
- [ ] Do we need individual per-case setup/teardown capabilities, or a global init function specific to a bench suite?
- [ ] c3c has `bench` and `benchmark` commands, and the `bench` target type is also confusing. I think we need more clarity there, maybe with better docs. I tried to add a `benchmark` or `bench` target to `project.json` but failed.
- [ ] Do we need benchmark filters like the ones in the test runner? It would be nice to have.
- [ ] More of a cosmetic request: it would probably be better to display the number of steps and to state explicitly that the timing number is an average per function call.
- [ ] The current timing method is quite coarse because it is prone to OS scheduler pauses: in the middle of a benchmark run the OS may perform a very long reschedule (hundreds of milliseconds), which skews the averages. There are techniques to overcome this (a timing sketch follows the list). It would also be useful to benchmark with a cold cache, to understand how the code might perform in different environments.
Let's discuss how to deal with the above.
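For context, a minimal sketch of the initializer pattern from the second checklist item, assuming the `@benchmark` attribute for benchmark functions and the startup attribute spelled `@init` in current compilers (`@initialize` above); the module and all names are illustrative:

```c3
module my_benchmarks;

// Illustrative global primed at startup for the benchmark.
int[1024] input;
long sink; // keeps the measured work observable to the optimizer

// Startup function. Without isolation it is also compiled into
// regular and test builds, which is the problem described above.
fn void prime_input() @init
{
    for (usz i = 0; i < input.len; i++) input[i] = (int)i;
}

// Picked up by the benchmark runner via the `@benchmark` attribute.
fn void bench_sum() @benchmark
{
    long sum;
    foreach (v : input) sum += v;
    sink = sum;
}
```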
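On the timing item: one common technique is to report the minimum over many runs instead of the mean, so a single long scheduler pause cannot skew the figure. A hedged sketch of that technique, not the engine's implementation; it assumes `std::time`'s `clock::now()`/`Clock.mark()` API, and `work_under_test` is a hypothetical stand-in for the measured code:

```c3
module timing_sketch;
import std::io;
import std::time;

long sink; // keeps the measured work observable to the optimizer

// Hypothetical stand-in for the benched function.
fn long work_under_test()
{
    long sum;
    for (int i = 0; i < 100_000; i++) sum += i;
    return sum;
}

fn void main()
{
    const int RUNS = 50;
    NanoDuration best;
    for (int i = 0; i < RUNS; i++)
    {
        Clock c = clock::now();
        sink += work_under_test();
        NanoDuration elapsed = c.mark();
        // Track the minimum: robust against rare long pauses,
        // unlike the mean.
        if (i == 0 || elapsed < best) best = elapsed;
    }
    io::printfn("best of %d runs: %d ns", RUNS, (long)best);
}
```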
- It seems wrong to me for tests and benchmarks to live in the same folder? But benchmarks could certainly have their own folder.
- You can guard an initializer with `@if(env::BENCHMARKING)`; it will then only be compiled when benchmarks are run (see the sketch after this list).
- I am not sure; I didn't implement the first benchmark runner myself, and it's fairly low priority for me.
- The `bench` target type and the `bench`/`benchmark` commands are indeed confusing. There is a similar problem with tests; see #1651.
- Benchmark filters are indeed being requested.
- The output could indeed be improved.
- Benchmarking correctly is indeed difficult. I have pondered removing the benchmarking feature entirely.
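A minimal sketch of that guard, applied to the earlier initializer example (names illustrative; `@init` is the current spelling of the startup attribute):

```c3
module my_benchmarks;

int[1024] input;

// Compiled only when env::BENCHMARKING is true, so the startup
// function no longer leaks into regular or test builds.
fn void prime_input() @init @if(env::BENCHMARKING)
{
    for (usz i = 0; i < input.len; i++) input[i] = (int)i;
}
```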