Support Cold/Hot Runs, Offline-Only Mode, and Environment Repeatability
Ensure robust benchmarking
**Cold/warm/hot runs**
- Differentiate between **cold** and **hot** runs (a sketch of one possible runner follows this list)
- Cold runs: flush OS and CPU caches before each measured run
- Hot runs: ignore the initial warm-up runs and measure only the subsequent iterations
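A minimal sketch of how a runner could treat the two modes, assuming a Linux host with sudo access (dropping the page cache requires it) and a hypothetical `run_benchmark` command standing in for the real tool:

```bash
#!/usr/bin/env bash
set -euo pipefail

MODE="${1:-hot}"   # "cold" or "hot" (hypothetical interface)
WARMUP=2           # hot runs: initial iterations to discard
RUNS=5             # iterations actually recorded

TOTAL=$RUNS
if [ "$MODE" = hot ]; then
  TOTAL=$((WARMUP + RUNS))
fi

for i in $(seq 1 "$TOTAL"); do
  if [ "$MODE" = cold ]; then
    # Flush dirty pages and drop the OS page cache so file reads hit the
    # disk again. CPU caches are not flushed explicitly; the fresh process
    # start plus the page-cache eviction displaces most relevant lines.
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
  fi

  start=$(date +%s%N)
  run_benchmark            # placeholder for the actual benchmark command
  end=$(date +%s%N)

  # Hot mode: skip the first $WARMUP iterations so caches are primed.
  if [ "$MODE" = cold ] || [ "$i" -gt "$WARMUP" ]; then
    echo $(( (end - start) / 1000000 ))   # elapsed milliseconds
  fi
done
```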
**Offline-only mode**
- Fetch solc binaries from a prefetched on-disk directory instead of the network
- Prefetch git submodules for the Foundry repos under test before the benchmarking tool starts recording time (see the prefetch sketch after this list)
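One possible prefetch step, run once before any timing begins. The solc cache location (`~/.svm`, Foundry's default), the use of the svm-rs `svm` CLI, the version list, and the `repos/` layout are all assumptions here:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Populate the on-disk solc directory Foundry resolves compilers from
# (~/.svm by default). The svm-rs `svm` CLI is one way to do this; any
# mechanism that fills the directory ahead of time works.
for version in 0.8.19 0.8.26; do    # hypothetical version list
  svm install "$version"
done

# Initialize git submodules for every repo under test (hypothetical
# repos/ layout) so no network fetches happen once timing starts.
for repo in repos/*/; do
  git -C "$repo" submodule update --init --recursive
done
```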
**Environment repeatability**
An entry-point script that sets up the host environment before the run can help ensure repeatability. For example:
```bash
#!/usr/bin/env -S bash --noprofile --norc -eo pipefail
# env's -S flag (GNU coreutils >= 8.30) allows passing multiple
# arguments in a shebang line.

# Since bash 5.0, checkwinsize is enabled by default, which updates the
# COLUMNS variable every time a non-builtin command completes, even for
# non-interactive shells. Disable it since we are aiming for repeatability.
test -n "$BASH_VERSION" && shopt -u checkwinsize 2>/dev/null

# For repeatability, reset the environment to known values.
# TERM is sanitized below, after saving color control sequences.
LANG=C
LC_ALL=C
PAGER=cat
TZ=UTC
COLUMNS=80
EDITOR=:
export LANG LC_ALL PAGER TZ COLUMNS EDITOR
```
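The excerpt stops before the part that sanitizes TERM and actually launches the benchmark. A hypothetical way to finish such a wrapper (not shown in the original) is to hand control to whatever command was passed on its command line, so it can be invoked as `./bench-env.sh <benchmark command>` (script name assumed):

```bash
# Hypothetical tail of the wrapper: run the benchmark command given as
# arguments, inheriting the sanitized environment set above.
exec "$@"
```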
We are already prefetching solc and building all contracts before we start benchmarking. I'm not sure I really understand the purpose of the environment repeatability script. What exactly is it guarding against?
Thanks! I think the difference here should be relatively minor. We expect timeouts in the range of 30s-5mins, so such prefetching should make very little difference. Let me know what you think, and please close if you think this explanation is sufficient :) Thanks!