py_ecc
Benchmarking the Library
What is wrong?
I personally think it would be better if some benchmarks were added to this library, so that we can make sure we are not undoing any optimizations while adding new functionality. Furthermore, this is a pretty low-level primitives module, where I believe speed matters.
How can it be fixed?
I think we could do something similar to what we are doing here.
@hwwhww @ChihChengLiang, do you have any suggestions for the benchmarking tests? Right now, all I can think of regarding benchmarking is pairing (a first sketch follows below). Any new tests would be helpful.
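For concreteness, a first pairing benchmark could look something like the minimal sketch below. It assumes pytest-benchmark as the harness (just one possible tooling choice, not an established convention in this repo) and uses the existing py_ecc.optimized_bls12_381 exports:

```python
# Minimal pairing benchmark sketch; pytest-benchmark is an assumption
# about tooling, not something the repo already uses.
from py_ecc.optimized_bls12_381 import G1, G2, multiply, pairing


def test_pairing(benchmark):
    # Fixed non-trivial points so every run performs the same work.
    P = multiply(G1, 123)
    Q = multiply(G2, 456)
    # pytest-benchmark calls pairing(Q, P) repeatedly and reports statistics.
    benchmark(pairing, Q, P)
```

Running it with `pytest --benchmark-only` would give min/mean/stddev timings per call.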
/cc @pipermerriam @carver
Some ideas I can come up with now. Most of these can only start after the migration of the BLS aggregation API from Trinity to here.
Benchmark that focuses on the bottleneck
This protects the most important concerns, so we don't have to worry about breaking things.
Example:
- Verify an aggregated signature from 2/3 of 312,500 validators.
- Verify a crosslink.
References for signature aggregation and crosslink calculation: https://ethresear.ch/t/pragmatic-signature-aggregation-with-bls/2105
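Once the migration lands, the bottleneck benchmark might look roughly like this sketch. All of the `bls.*` names below (privtopub, sign, aggregate_signatures, aggregate_pubkeys, verify) mirror the Trinity API and are assumptions until the code actually moves here:

```python
# Rough sketch of the aggregation-bottleneck benchmark; the bls module
# and its function names are hypothetical until the migration from Trinity.
import time

from py_ecc import bls  # hypothetical import path post-migration

DOMAIN = 0
MESSAGE = b"crosslink data root"
# 2/3 of 312,500 validators is ~208,333; start small while developing
# the benchmark and scale up for the real measurement.
NUM_VALIDATORS = 1000


def bench_aggregate_verification():
    privkeys = list(range(1, NUM_VALIDATORS + 1))
    pubkeys = [bls.privtopub(k) for k in privkeys]
    signatures = [bls.sign(MESSAGE, k, DOMAIN) for k in privkeys]

    aggregate_signature = bls.aggregate_signatures(signatures)
    aggregate_pubkey = bls.aggregate_pubkeys(pubkeys)

    started = time.perf_counter()
    assert bls.verify(MESSAGE, aggregate_pubkey, aggregate_signature, DOMAIN)
    print(f"aggregate verification: {time.perf_counter() - started:.3f}s")
```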
Benchmark that gives insights on APIs
This makes it more straightforward to map the measurements to use cases (a sketch follows the list).
Example:
- sign
- verify
- verify_multiple
- etc...
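A per-API benchmark file could then be as simple as one pytest-benchmark test per function (same caveat: the `bls.*` names are assumed from the Trinity API):

```python
# Per-API benchmarks; the bls module and names are assumptions
# until the migration from Trinity happens.
from py_ecc import bls  # hypothetical import path post-migration

DOMAIN = 0
MESSAGE = b"message"
PRIVKEY = 42


def test_sign(benchmark):
    benchmark(bls.sign, MESSAGE, PRIVKEY, DOMAIN)


def test_verify(benchmark):
    pubkey = bls.privtopub(PRIVKEY)
    signature = bls.sign(MESSAGE, PRIVKEY, DOMAIN)
    benchmark(bls.verify, MESSAGE, pubkey, signature, DOMAIN)
```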
Benchmark that gives insights on units
So we can be more sensitive to performance changes.
Example: this page shows how the performance of each operation changes over time: https://speed.z.cash/timeline/#/?exe=1,2&base=1+9&ben=bls12_381::ec::g1::bench_g1_add_assign&env=1&revs=50&equid=off&quarts=on&extr=on
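For py_ecc the analogous unit would be a single group operation. A stdlib-only sketch (no extra tooling assumed) that could be run per commit and charted over time:

```python
# Unit-level microbenchmark for one group operation (G1 addition),
# analogous to the bench_g1_add_assign timeline linked above.
import timeit

from py_ecc.optimized_bls12_381 import G1, add, multiply

P = multiply(G1, 123)
Q = multiply(G1, 456)

# Time many iterations so per-call overhead is amortized; recording this
# number for every commit is what produces the timeline view.
n = 10000
total = timeit.timeit(lambda: add(P, Q), number=n)
print(f"g1_add: {total / n * 1e6:.2f} us/op")
```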