speed-comparison
Provide "clean" and "real world" results
As suggested in #51 by @HenrikBengtsson, more "clean" data for the calculation of pi could be gathered by measuring the performance of each language with and without the pi calculation and then subtracting one from the other.
> [...] I think it would be better if you could find a way to not include the startup times, and the parsing of 'rounds.txt' in the results. A poor man's solution would be to benchmark each language with and without the part that calculates pi and then subtract to get the timings of interest.
I think it would be best to keep both sets of data: "real world" data with startup and IO, and "clean" data for just calculating pi. I would keep both in the CSV, but I'm not sure which one to favour for the image creation. Probably the "clean" data 🤔
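A minimal sketch of that subtraction idea, assuming a normal benchmark program (`leibniz.py`) and a hypothetical stripped-down variant without the pi loop (`leibniz_nopi.py`); it records the "real world" time and the derived "clean" time side by side in a CSV:

```python
# Sketch only: file names and the CSV layout are assumptions, not the repo's actual setup.
import csv
import subprocess
import time

def wall_time(cmd: list[str]) -> float:
    """Return the wall-clock time of one full process run in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

full_cmd = ["python3", "leibniz.py"]           # normal benchmark program
baseline_cmd = ["python3", "leibniz_nopi.py"]  # same program without the pi loop (hypothetical file)

real_world = wall_time(full_cmd)      # startup + IO + pi calculation
baseline = wall_time(baseline_cmd)    # startup + IO only
clean = real_world - baseline         # "clean" pi-calculation time

with open("results.csv", "a", newline="") as f:
    csv.writer(f).writerow(["python3", real_world, clean])
```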
In terms of implementation, I can see two approaches:
- the straightforward way of having a second implementation file for each language
- the dynamic way of putting a comment with a keyword in the source file that marks where to cut it off, so a variant without the pi calculation can be produced (see the sketch below)
Obviously, both would require adjustments to scbench and the analysis step.
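A minimal sketch of the cut-off idea; the marker name `SCBENCH_CUTOFF` and the helper are purely hypothetical, scbench itself would need something equivalent:

```python
# Sketch only: the marker keyword and file names are made up for illustration.
from pathlib import Path

MARKER = "SCBENCH_CUTOFF"  # hypothetical marker comment placed right before the pi loop

def strip_pi_part(source_file: str, output_file: str) -> None:
    """Copy source_file up to (but not including) the marker line."""
    kept_lines = []
    for line in Path(source_file).read_text().splitlines(keepends=True):
        if MARKER in line:
            break
        kept_lines.append(line)
    Path(output_file).write_text("".join(kept_lines))

# Example: produce a variant of leibniz.py that stops before the pi calculation.
strip_pi_part("leibniz.py", "leibniz_nopi.py")
```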
I believe setting rounds to 0 should be about equivalent.
@francescoalemanno that is a brilliant idea. It would at least make things way easier to implement.
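A minimal sketch of that rounds-to-0 idea, assuming the program reads its round count from `rounds.txt`; the command is a placeholder:

```python
# Sketch only: the same program is timed twice, once with 0 rounds
# (startup + IO only) and once with the real value; the difference
# is the "clean" pi-calculation time.
import subprocess
import time
from pathlib import Path

def timed_run(cmd: list[str], rounds: int) -> float:
    Path("rounds.txt").write_text(str(rounds))
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

cmd = ["python3", "leibniz.py"]          # placeholder command
baseline = timed_run(cmd, 0)             # program starts, reads rounds.txt, skips the loop
real_world = timed_run(cmd, 1_000_000)   # full run
clean = real_world - baseline
print(f"real world: {real_world:.3f}s  clean: {clean:.3f}s")
```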
@niklas-heer
I reimplemented the benchmark for C++, Java, Golang, Python and JavaScript: https://github.com/Glavo/leibniz-benchmark
I ran twenty rounds of benchmarking and averaged the time spent on the last ten rounds. Here are the results I got:
I think the "clean" result is the one that reflects the real-world situation.
The main factor affecting the results now is startup and loading time, not the actual performance of the language.
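A rough sketch of that measurement scheme, transliterated to Python for illustration: the pi calculation runs twenty times inside one process and only the last ten timings are averaged, so startup and warm-up effects drop out of the reported number. The round count is an arbitrary example value.

```python
# Sketch only: in-process warm-up plus averaging of the last ten rounds.
import statistics
import time

def leibniz_pi(rounds: int) -> float:
    """Approximate pi with the Leibniz series."""
    pi = 1.0
    sign = -1.0
    for i in range(rounds):
        pi += sign / (2.0 * i + 3.0)
        sign = -sign
    return pi * 4.0

timings = []
for _ in range(20):               # twenty rounds in one process
    start = time.perf_counter()
    leibniz_pi(1_000_000)
    timings.append(time.perf_counter() - start)

clean_avg = statistics.mean(timings[-10:])  # average of the last ten rounds
print(f"average of last 10 rounds: {clean_avg:.4f}s")
```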