Create benchmarks to measure overhead of this against `println!`
This crate does some stuff on top of just writing to some output. We should write some benchmarks to figure out the performance cost of that.
- Write benchmark to measure time to print n lines of example text (`output.print("lorem ipsum")` vs `println!("lorem ipsum")`); a sketch of the plain-stdout variants follows the list
  - [ ] to stdout using the "human" target
  - [ ] to stdout using the "human" target's test helper
  - [ ] to a file using the "json" target
  - [ ] print to stdout using `println!`
  - [ ] print to stdout using `writeln!` on a locked stdout
  - [ ] print to stdout using `writeln!` on a buffered stdout
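For the plain-stdout variants, a benchmark might look roughly like the sketch below. This assumes the `criterion` crate as the harness (any bench harness would do); the crate's own "human"/"json" targets need their own setup and aren't shown, and the output-capturing caveat raised in the comments below still applies.

```rust
use std::io::Write;

use criterion::{criterion_group, criterion_main, Criterion};

fn bench_stdout(c: &mut Criterion) {
    // Plain println!: locks and writes to stdout on every call.
    c.bench_function("println", |b| b.iter(|| println!("lorem ipsum")));

    // writeln! on a locked stdout: the lock is taken once, outside the loop.
    c.bench_function("writeln_locked", |b| {
        let stdout = std::io::stdout();
        let mut lock = stdout.lock();
        b.iter(|| writeln!(lock, "lorem ipsum").unwrap());
    });

    // writeln! on a buffered stdout: writes are batched in memory.
    c.bench_function("writeln_buffered", |b| {
        let stdout = std::io::stdout();
        let mut out = std::io::BufWriter::new(stdout.lock());
        b.iter(|| writeln!(out, "lorem ipsum").unwrap());
    });
}

criterion_group!(benches, bench_stdout);
criterion_main!(benches);
```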
I'm trying to handle this issue but have encountered some problems.
- How do I target stdout with the "human" target's test helper?
- `println!`'s output will be captured by the bencher, and I can't pass `--nocapture` like in the `cargo test` command, so it's not a fair comparison.
Hey! Sorry it took me a while to get around to answering this. You're asking some good (and tough!) questions, let me try to answer them and describe what I'd do :)
To make this as realistic as possible, I'd try to benchmark not functions running the code described above, but the execution of compiled programs. This will of course include the overhead of executing a process, but since it's a mostly constant overhead, we can measure it and, assuming our benchmark programs run long enough, we will still get valuable data. Additionally, we can easily capture the output.
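For instance, that constant overhead could be estimated by timing a do-nothing binary and subtracting the result from the real measurements later. A tiny sketch (the `noop` binary path and the iteration count are made up for illustration):

```rust
use std::process::Command;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    let runs: u32 = 100;
    let start = Instant::now();
    for _ in 0..runs {
        // Hypothetical binary that does nothing; its runtime approximates
        // the pure cost of spawning and waiting for a process.
        Command::new("target/release/noop").status()?;
    }
    println!("avg process overhead: {:?}", start.elapsed() / runs);
    Ok(())
}
```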
So, writing a benchmark might look like this:
- create a temp dir
- add a Cargo.toml file with the required dependencies, and package name "bench"
- add a src/main.rs file with the code to benchmark
- run `cargo build --release`
- then, in each bench iteration, run a subprocess like `your/temp/dir/target/release/bench` and wait for it to finish (but don't capture the output); a rough sketch of this setup follows the list
- to test colored output, you might need to set some environment variables (see the termcolor crate)
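Put together, a minimal sketch of that harness might look like this, assuming the `tempfile` crate for the temp dir; the generated package contents, the body of main.rs, and the iteration counts are only illustrative:

```rust
use std::fs;
use std::process::Command;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    // 1. Create a temp dir and a minimal cargo project named "bench".
    let dir = tempfile::tempdir()?;
    fs::write(
        dir.path().join("Cargo.toml"),
        r#"[package]
name = "bench"
version = "0.1.0"
edition = "2018"

[dependencies]
# add the crate under test here
"#,
    )?;
    fs::create_dir(dir.path().join("src"))?;
    fs::write(
        dir.path().join("src/main.rs"),
        r#"fn main() {
    for _ in 0..100_000 {
        println!("lorem ipsum");
    }
}
"#,
    )?;

    // 2. Build once in release mode.
    let status = Command::new("cargo")
        .args(&["build", "--release"])
        .current_dir(dir.path())
        .status()?;
    assert!(status.success());

    // 3. In each bench iteration, run the compiled binary and wait for it
    //    to finish; stdout/stderr are inherited, so nothing is captured.
    let binary = dir.path().join("target/release/bench");
    let start = Instant::now();
    for _ in 0..10 {
        let status = Command::new(&binary).status()?;
        assert!(status.success());
    }
    println!("total: {:?}", start.elapsed());
    Ok(())
}
```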
Hope this helps! I've just written this on my way to the office, so I probably forgot some things. Please comment here if you have any questions!