youki
Determine a benchmark comparison table format for the README
This is a follow-up issue to #464, which has to be implemented first. That issue is about how to implement the benchmarking method the right way, whereas this one is about how to visualize a clean and simple table in the README. Just dropping the idea here so we don't forget!
@utam0k measured the performance using hyperfine. Since hyperfine is a really nice benchmarking tool, we could include some valuable information such as the following (see the rough sketch after this list):
- CPU: min, max, mean
- MEM: min, max, mean
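As a rough sketch of how those numbers could be collected (the bundle path, container ID, and runtime flags below are illustrative assumptions, not the actual benchmark script), a hyperfine run per runtime and per command might look like this:

```sh
# Benchmark `youki create` with hyperfine (illustrative only).
# hyperfine reports wall-clock min/max/mean; memory and CPU usage
# would need a separate tool such as /usr/bin/time.
sudo hyperfine --warmup 3 \
  --prepare 'youki delete --force bench 2>/dev/null || true' \
  --export-markdown create-youki.md \
  --export-json create-youki.json \
  'youki create --bundle ./bundle bench'
# Repeat with runc and crun, then merge the results into the table.
```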
We can create a table like this for each command:
| runtime | release | cmd: create | diff | winner |
|---|---|---|---|---|
| youki | v1 | mem: (min: x, max: y), cpu: (min: x, max: y), time: (min: x, max: y, mean: z) | +2% | youki |
| runc | v2 | mem: (min: x, max: y), cpu: (min: x, max: y), time: (min: x, max: y, mean: z) | +3% | runc |
| crun | v3 | mem: (min: x, max: y), cpu: (min: x, max: y), time: (min: x, max: y, mean: z) | +4% | crun |
- runtime: name of the runtime
- release: permalink to the release tag or commit
- cmd: which command is being benchmarked
- diff: how much faster or slower it is compared to the other runtimes
- winner: which runtime came out on top
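To fill in the time cells, the JSON export from a run like the one sketched above could be parsed, for example with jq (the file name is just the one assumed in the earlier sketch):

```sh
# Turn hyperfine's JSON results into the `time` cell of the table
# (hyperfine reports these values in seconds).
jq -r '.results[] | "time: (min: \(.min), max: \(.max), mean: \(.mean))"' create-youki.json
```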
What do you think?
@Dentrax
Thanks for your interest!
This table is very cool. However, the winner column may be a bit extreme, so I think we should remove it. As for which runtime is superior, just looking at the numbers should be enough.