Benchmark mode
--report-json and --repeat already provide the building blocks for collecting metrics and processing them.
It would be nice to automate this: a --benchmark option could produce a report like those provided by hyperfine, showing what differs between two runs and, when --repeat is passed, the distribution of timings/latency.
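As a sketch of the post-processing a --benchmark option could automate, the snippet below computes summary statistics over per-run durations. The report shape and the `time` field name here are assumptions for illustration, not the actual --report-json schema.

```python
import json
import statistics

# Hypothetical report: a list of run entries, each with a duration
# in milliseconds. Field names are assumptions, not Hurl's actual
# --report-json output format.
report = json.loads("""
[
  {"filename": "test.hurl", "time": 12},
  {"filename": "test.hurl", "time": 15},
  {"filename": "test.hurl", "time": 11},
  {"filename": "test.hurl", "time": 14}
]
""")

times = [entry["time"] for entry in report]

# The kind of distribution summary a hyperfine-style report shows.
print(f"runs:  {len(times)}")
print(f"min:   {min(times)} ms")
print(f"max:   {max(times)} ms")
print(f"mean:  {statistics.mean(times):.1f} ms")
print(f"stdev: {statistics.stdev(times):.1f} ms")
```

A real --benchmark mode could run this aggregation internally over the repeated runs, and additionally diff two such summaries to report regressions between runs.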