benchmarking
Simplify multiple scripts and logs in the nodejs/benchmarking infrastructure
As I was working to add node-dc-eis to nodejs/benchmarking, I realized how much unnecessary complexity we have due to multiple scripts scattered across each benchmark. Frankly, I don't think it's sustainable as we add more and more tests/benchmarks. I can contribute to this task if we as a group think it's important.
Here is a list of scripts in use today:
- fp.sh (function to get memory footprint: RSS, heap, etc.)
- cpuParse.sh (CPU statistics per process: node, any client, and mongod)
- mongodb.sh (starts a mongodb instance)
- kill_node_linux (stops all node processes)
- some awk scripts
In addition, there are scripts per workload/benchmark (run_acmeair.sh, run-dc-eis.sh, etc.).
On top of that, each of these scripts generates its own log output in a different directory and processes it for whatever output it needs.
I'm proposing to put these scripts in one place, have them sourced as needed, and generate output in one log file.
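To make the idea concrete, here is a minimal sketch of what a shared, sourceable library could look like. All names here (a `lib/common.sh` file, `LOG_FILE`, `log`, `footprint`) are hypothetical, not existing scripts in the repo; the point is that each run_*.sh would source one file and every script would append to the same log:

```shell
#!/bin/sh
# Hypothetical shared library (e.g. lib/common.sh) that each run_*.sh
# would source. Every helper writes to the SAME log file instead of
# each script creating its own output in its own directory.

LOG_FILE="${LOG_FILE:-benchmark.log}"

# Append one timestamped line to the single shared log.
log() {
  printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" >> "$LOG_FILE"
}

# Example helper: record the RSS (in kB) of a process by PID, roughly
# what fp.sh collects today, but logged to the shared file.
footprint() {
  pid="$1"
  rss=$(awk '/^VmRSS:/ {print $2}' "/proc/$pid/status" 2>/dev/null)
  log "footprint pid=$pid rss_kb=${rss:-unknown}"
}

log "library loaded"
```

A per-benchmark script would then shrink to `. lib/common.sh` plus the benchmark-specific start/stop commands.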
Here is my initial assessment of the current execution flow:
- Preparation - setting up workspace
- Build a node version
- If the test needs a database:
  - Start the database (e.g. mongodb)
- Start benchmark
- Start a client
- Wait for fixed timeout
- Stop everything after waiting period
- Post results
- Archive results
- Cleanup workspace
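The flow above could be captured in a single driver function, so every benchmark runs through the same steps. This is only a sketch; `run_one` and all the echoed step names are illustrative placeholders, not proposed final names:

```shell
#!/bin/sh
# Hypothetical driver sketch of the execution flow listed above.
# Each echo stands in for the real step (build, start, archive, ...).

run_one() {
  benchmark="$1"; needs_db="$2"

  echo "prepare workspace for $benchmark"
  echo "build a node version"
  if [ "$needs_db" = "yes" ]; then
    echo "start database (e.g. mongodb)"
  fi
  echo "start benchmark"
  echo "start client"
  sleep 1                      # stands in for the fixed timeout
  echo "stop all processes"
  echo "post results"
  echo "archive results"
  echo "cleanup workspace"
}

run_one acmeair yes
```

Adding a new benchmark would then mean supplying its name and whether it needs a database, rather than copying a whole run_*.sh script.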
Data collection today:
- Memory footprint before the run (fp.sh)
- CPU stats for each process, viz. node, client (if needed), database (if needed) during the run (cpuParse.sh)
- Memory footprint after the run (fp.sh)
- Throughput (ops/sec)
- Latency
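If all of these measurements land in one log file with consistent key=value fields, post-processing becomes a single pass instead of per-script awk. A sketch, assuming a Linux host (it reads VmRSS from /proc, similar to what fp.sh does) and with the ops/seconds numbers as stand-ins for measured values:

```shell
#!/bin/sh
# Sketch: collect footprint before/after plus throughput into ONE
# results file. Field names (rss_before_kb, throughput_ops_sec, ...)
# are illustrative, not an existing format.

LOG=results.log

# RSS of a PID in kB, read from /proc (Linux-specific).
rss_kb() {
  awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

before=$(rss_kb $$)            # footprint before the run
ops=1000; secs=10              # stand-ins for measured ops and duration
after=$(rss_kb $$)             # footprint after the run

printf 'rss_before_kb=%s rss_after_kb=%s throughput_ops_sec=%s\n' \
  "$before" "$after" "$((ops / secs))" >> "$LOG"

cat "$LOG"
```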
@mhdawson Can we add this to the agenda for #245?
@davisjam done