Create infrastructure for simulating retro funding algorithms
What is it?
The Retro Funding repo currently has some utilities for testing model weights and simulating results, but the workflow is pretty manual. The user also has to learn via trial and error which changes move them closer to their objective (e.g., a flatter distribution, an under-valued devtool doing well, etc.).
We should create a proper simulation framework, with a formal way of configuring simulations and measuring their impact on reward distributions.
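For illustration, here is a minimal sketch (Python, with hypothetical names) of the kind of comparison report such a framework could produce: a simple flatness measure (Gini coefficient) plus the projects whose reward share moved the most relative to a baseline run.

```python
# Hypothetical sketch only: names and shapes are illustrative, not the repo's actual API.

def gini(shares: list[float]) -> float:
    """Gini coefficient of a reward distribution (0 = perfectly flat)."""
    xs = sorted(shares)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n


def compare_distributions(
    baseline: dict[str, float], simulated: dict[str, float], top_n: int = 5
) -> None:
    """Report flatness before/after and the biggest per-project movers."""
    print(f"Gini: {gini(list(baseline.values())):.3f} -> {gini(list(simulated.values())):.3f}")
    deltas = {
        p: simulated.get(p, 0.0) - baseline.get(p, 0.0)
        for p in set(baseline) | set(simulated)
    }
    for project, delta in sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]:
        print(f"  {project}: {delta:+.4f}")
```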
Here is an initial set of simulation parameters that a user might want to explore (a rough config sketch follows the list):
1. The set of projects in the round: what if Uniswap drops out, or Zora comes back in?
2. Pre-trust data about those projects: what if we include new signals, e.g., past grants from the Token House? (I would include the devtooling labels in this category.)
3. Events between projects: a dev connection in May 2025 vs. 2024, or a big project importing a new package.
4. Algo design / business logic: EigenTrust vs. PageRank.
5. Weights (anything in the YAML, including budget caps and alpha values).
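One way this could be formalized (purely a sketch, with hypothetical class and field names) is a single config object covering all five axes, which a simulation runner consumes and turns into a reward distribution:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical sketch: names, shapes, and defaults are illustrative only.

@dataclass
class SimulationConfig:
    # 1. Which projects are in the round (e.g. drop Uniswap, re-add Zora).
    projects: list[str]
    # 2. Pre-trust signals per project (e.g. devtooling labels, past Token House grants).
    pretrust: dict[str, float] = field(default_factory=dict)
    # 3. Timestamped events between projects (dev connections, package imports).
    events: list[dict] = field(default_factory=list)
    # 4. Algorithm / business logic (e.g. an EigenTrust or PageRank runner).
    algorithm: Optional[Callable[[SimulationConfig], dict[str, float]]] = None
    # 5. Weights: anything that currently lives in the YAML (budget caps, alpha values, ...).
    weights: dict[str, float] = field(default_factory=dict)


def run_simulation(config: SimulationConfig) -> dict[str, float]:
    """Run one simulation and return a project -> reward-share mapping."""
    if config.algorithm is None:
        raise ValueError("config.algorithm must be set")
    return config.algorithm(config)
```

A run then becomes: build a config, run it, and compare it against the baseline with something like the `compare_distributions` sketch above. That also gives a natural hook for batch sweeps over any of the five axes.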
Right now, most of the focus has been on 5 (weights).
Here's the state-of-the-art in terms of manual simulation analysis 😉
https://docs.google.com/spreadsheets/d/1y5pTWKPtbxfvAoEIlmT6_76gL9CLW6XUxuBbDOodMis/edit?usp=sharing