Add Capped tally to limit votes when approval might change drastically
Whale mitigation in OpenGov
This PR introduces an idea to mitigate the outsized influence that users with big capital can have when voting on referenda with relatively low support, along with a follow-up topic on how to avoid gaming the solution.
Capped votes
Users voting with a large amount of funds (conviction and delegations included) can change the approval ratio considerably with a single vote. This has proven to be problematic: single individuals can not only decide the fate of entire proposals but also introduce a large voting bias, as future voters might follow the new trend blindly without realizing it was set by a single voter rather than by the entire community.
Here I propose a Capped struct that implements VoteTally and wraps Tally, intercepting the methods that mutate the internal ayes/nays data of the struct and limiting the mutation so that the change in approval caused by the vote doesn't exceed the defined MaxVoteSlippage percentage (not sure if "slippage" is the right name).
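To make the idea concrete, here is a minimal standalone sketch of what such a wrapper could look like. It does not use the actual Substrate types (the real thing would wrap pallet-referenda's Tally and implement the VoteTally trait from frame-support); the types, the 0.5 no-vote convention, and the binary search for the clamped weight are all illustrative assumptions.

```rust
/// Simplified stand-in for the referenda tally: aye/nay vote weight only.
#[derive(Debug, Clone, Copy)]
struct Tally {
    ayes: u64,
    nays: u64,
}

impl Tally {
    /// Approval ratio in [0.0, 1.0]; arbitrarily 0.5 when no votes were cast.
    fn approval(&self) -> f64 {
        let total = self.ayes + self.nays;
        if total == 0 { 0.5 } else { self.ayes as f64 / total as f64 }
    }
}

/// Wrapper that caps how much a single vote may move the approval ratio.
struct Capped {
    inner: Tally,
    /// Maximum allowed change in approval per vote, e.g. 0.10 = 10 points.
    max_vote_slippage: f64,
}

impl Capped {
    /// Add an aye vote, clamping its weight so the approval ratio moves by
    /// at most `max_vote_slippage`. Returns the weight actually counted.
    fn add_aye(&mut self, weight: u64) -> u64 {
        let before = self.inner.approval();
        // Binary search for the largest weight whose resulting approval
        // still stays within the slippage bound.
        let (mut lo, mut hi) = (0u64, weight);
        while lo < hi {
            let mid = lo + (hi - lo + 1) / 2;
            let trial = Tally { ayes: self.inner.ayes + mid, ..self.inner };
            if (trial.approval() - before).abs() <= self.max_vote_slippage {
                lo = mid;
            } else {
                hi = mid - 1;
            }
        }
        self.inner.ayes += lo;
        lo
    }
}
```

For example, with 100 aye / 100 nay already tallied (approval 0.5) and a 10% cap, an incoming aye vote of weight 10,000 would only be counted up to weight 50, leaving approval at 0.6. Note that with a completely empty tally this sketch counts nothing, since any first vote moves approval by more than the cap.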
The concept of a whale therefore becomes relative, with the nice side effect (IMO) that early whales might want to keep coming back to review the proposal later on and cast their vote again with an updated conviction (also giving teams the chance to improve their proposal and persuade early detractors), or whales would simply have an incentive to vote later, reducing the bias they introduce with their heavy conviction.
Whale alert track
The previous method could be gamed with automation spreading funds across multiple accounts. My second proposal would be to have a track in governance that allows citizens to report malicious/anti-democratic behavior by a set of accounts (i.e. the accounts controlled by the whale). The reporter provides enough evidence to the general community, or to a specific origin that looks after the community's code of ethics (the Alliance?), and if the report is approved, the reported accounts lose all of the funds used for the vote, which get transferred to the reporter. The reporter also places a deposit that could be lost in case the accounts are reported unfairly, without sufficient proof.
The CI pipeline was cancelled due to the failure of one of the required jobs. Job name: cargo-check-benches Logs: https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/2299858
Aside from the question of whether whales should or shouldn't have such power over referenda, this is still incredibly foolish since it wouldn't allow anyone to vote on referenda. Assuming that this code will work and that I understand the intent, the first voter on a referendum would change the approval by infinity percent, therefore no one would ever be able to cast a vote.
So let's say you try to fix that issue by making the MaxVoteSlippage a function of the number of votes, with a vertical asymptote at x=1, allowing a first vote. How should that function decrease? What should the slippage threshold be for the second vote? The third vote? The 10th, 100th, or 1000th vote? As you can see, this is a non-trivial configuration.
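A decreasing bound with a vertical asymptote at x = 1, as described above, could for illustration take a shape like the following. The constants and the reciprocal decay are invented purely to show why the configuration is non-trivial; nothing here is a proposed parametrization.

```rust
/// Allowed approval change for the n-th vote on a referendum.
/// Hypothetical shape: unbounded for the first vote (asymptote at n = 1),
/// then decaying like scale / (n - 1), clamped below by `floor`.
fn slippage(n: u64, scale: f64, floor: f64) -> f64 {
    if n <= 1 {
        f64::INFINITY // the first vote is never capped
    } else {
        (scale / (n as f64 - 1.0)).max(floor)
    }
}
```

With scale = 0.5 and floor = 0.01, the second vote may move approval by up to 50 points, the 11th by 5 points, and from roughly the 51st vote onward by only the 1-point floor; every one of those numbers is a knob someone would have to justify.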
This concept simply does not work at all.
Regarding the question of whale influence, I truly believe that users with the largest token balances should have the most power. If I go out and buy $1,000,000 worth of KSM and lock that KSM with 6x conviction on a referendum, you'd better believe I'll want that reflected in the tally no matter what. Users with the largest balance have the most to gain or lose, so in theory they should support proposals that will improve the network and reject proposals that will make it worse. They should also be among the most educated on referenda because of how consequential their votes are; they wouldn't want to vote AYE on a catastrophic referendum that would make their investment go to zero, and they wouldn't want to vote NAY on a referendum that will revolutionize and improve the technology.
Beyond being a foolish attempt to fix your perceived problem, this fundamentally misunderstands the underlying assumptions of the network. People do not exist as constituents, only tokens. Actors with a large amount of tokens at stake control the network. They have the most to win and lose, so they have the biggest say in the direction of the network.
You cannot find a way around this.
It might be a naive and foolish approach, but I think there is definitely a problem to be solved. It could be that we need to tweak conviction multipliers and locking periods, or do something entirely different, but I refuse to believe our decentralized networks will end up at the mercy of a select few. The huge disparity between the average voter and those with a big stake creates a very visible effect of manipulating people's opinion. We clearly saw in our two proposals, and in others, how high approval persuades voters to go for aye, and similarly how, when approval is low, the default becomes nay.
> As you can see, this is a non-trivial configuration.
We don't need to make it that complex; perhaps a second parameter to know when to start checking? A special condition when no vote has been cast? I'm sure we can figure it out 😉 Anyway, I'll hold off on the implementation for now to gather more feedback. If, after ironing out any unsoundness in the implementation, the solution still ends up being so controversial that it can't be merged, it'll be exciting to also try out the first runtime upgrade built outside Parity, don't you think? We're always ready to bring some chaos to our beloved canary 🔥
> People do not exist as constituents, only tokens.
I guess it's inevitable that this becomes a philosophical topic, but it seems we are here for different reasons and see the system in very different ways. Tokens are held by people; we build systems for people, to solve the big problems of people. It is (or should be) always about the human behind the token, and we have to care for the common good, not for the interests of a few. If you buy $1,000,000 worth of KSM and expect it to get you everything you want, I'd say you are a fool and perhaps should move your money elsewhere. Or perhaps it's us idealists, who think we can build fairer societies, that should move elsewhere.
Closing for now. I'll revisit the topic of "power balancing" with a different, less controversial approach :)