[Vault] Max rebase rate control
Project
Move all yield rate limiting to the vault. This removes the need for the dripper, ensures that all yield is rate controlled, blocks donation attacks against lending platforms, and will give us more consistent, automatically controlled yield.
Schedule
- [ ] Code + author audit (dvf) Wednesday, March 26
- [ ] Internal code review (sparrowDom), March 28th
- [ ] Internal code review (panda), March 28th
- [ ] Audit (OpenZeppelin), March 28th
Code Change Checklist
To be completed before internal review begins:
- [ ] The contract code is complete
- [ ] Executable deployment file
- [ ] Fork tests that test after the deployment file runs
- [ ] Unit tests (if needed)
- [ ] The owner has done a full checklist review of the code + tests
Internal review:
- [ ] Two approvals by internal reviewers
| Warnings | |
|---|---|
| :warning: | :eyes: This PR needs at least 2 reviewers |
Generated by :no_entry_sign: dangerJS against 84214c2a49c72f02745bb8d6d0c2c0eb5d96e6b6
There are a lot of tests failing
Code Review - Vault Slow Drip
Requirements
We have two goals: to stop donation attacks and to provide fine-grained yield control. This PR takes every source of yield that could be rebased out and allows it to be throttled and limited.
The primary objective is to stop donation attacks. This is done by hard capping the maximum amount of yield that can occur in a single rebase, as well as controlling the maximum amount of yield over time. This prevents exchange rate changes from donations from being big enough to exploit lending platforms.
Secondly, we want to allow for a smooth distribution of yield under normal circumstances. This PR provides two options for this. First, a hard cap on the maximum amount of yield can be set. If this is lower than the amount of yield coming in, you essentially have a fixed-rate dripper that distributes at a fixed APY. Alternatively, we can use an auto-setting dripper that automatically picks rates once we set the frequency at which we expect large income events to happen.
The yield control is under the control of the strategist; however, the anti-donation limits are hardcoded into the contract. The strategist cannot configure a rate that would cause a problem for a lending platform.
The drip-duration-controlled code should produce fairly consistent yield as long as the next yield arrives on the schedule it expects. If it doesn't, yield will begin smoothly decreasing at the end of that period.
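As a concrete but purely illustrative sketch of the two layers described above: the state variable names below (rebasePerSecondMax, dripDuration, lastRebase) follow this review, but the formulas are assumptions for illustration, not the PR's actual implementation.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative sketch only; the exact math in the PR may differ.
contract YieldThrottleSketch {
    uint64 public lastRebase;           // timestamp of the previous rebase
    uint256 public rebasePerSecondMax;  // hard anti-donation cap, strategist configurable
    uint256 public dripDuration;        // expected gap between large income events

    // Yield that may be distributed right now: never more than an even drip of
    // the pending yield over dripDuration, and never more than the per-second
    // cap allows for the elapsed time.
    function _availableYield(uint256 pendingYield) internal view returns (uint256) {
        uint256 elapsed = block.timestamp - lastRebase;

        // Smoothing layer: drip the pending yield out over the expected period.
        // A dripDuration of zero or one disables smoothing in this sketch.
        uint256 yieldNow = dripDuration > 1
            ? (pendingYield * elapsed) / dripDuration
            : pendingYield;
        if (yieldNow > pendingYield) yieldNow = pendingYield;

        // Hard cap layer: the anti-donation per-second limit.
        uint256 hardCap = rebasePerSecondMax * elapsed;
        return yieldNow < hardCap ? yieldNow : hardCap;
    }
}
```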
Authentication
- [x] Never use tx.origin
- [x] Every external/public function is supposed to be externally accessible
- [x] Every external/public function has the correct authentication
Ethereum
- [x] Contract does not send or receive Ethereum.
- [x] Contract has no payable methods.
- [x] Contract is not vulnerable to being sent self destruct ETH
Cryptographic code
No crypto code
Gas problems
No for loops
Black magic
No magic.
Overflow
- [x] Code is solidity version >= 0.8.0
- [x] All for loops use uint256
Proxy
- [x] No storage variable initialized at definition when contract used as a proxy implementation.
Events
- [x] All state changing functions emit events
Medium Checks
Rounding and casts
- [x] Contract rounds in the protocol's favor
- [x] Each division rounds in the direction of the protocol giving out less yield.
- [x] The fee calculation division rounds against the protocol receiving fees in favor of the user receiving yield. That behavior is acceptable.
- [x] Contract does not have bugs from losing rounding precision / bugs from zero or near zero amounts
- [x] The rebasePerSecondMax has plenty of resolution before it rounds to zero
- [x] dripDuration is not used when set to one or zero.
- [x] We are paranoid about what happens when you mint to the OUSD tokens, and do not assume that totalSupply changes by that exact amount.
- [x] Code correctly multiplies before division
- [x] 🟣 At one point, we do divide yield by dripDuration times two. However, this is effectively dividing by a constant.
- [x] Safecast is always used when casting
- [x] Safecast is used in all admin functions.
- [x] 🟣 Last rebase is deliberately not safecasted, so that a bazillion years in the future, the code will continue to allow deposits and withdrawals. However it is checked so that there will be no more yield after that point is reached.
- [x] 🟣 Storing rebasePerSecondTarget does not use safe cast but rather minimums. This is so that it does not revert, which would again block deposits and withdrawals.
Dependencies
- [x] Review any new contract dependencies thoroughly (e.g. OpenZeppelin imports) when new dependencies are added or version of dependencies changes.
- [x] Check OpenZeppelin security vulnerabilities and see if any apply to current PR considering the version of OpenZeppelin contract used.
External calls
- [x] Contract addresses passed in are validated
- [x] No addresses passed in.
- [x] No unsafe external calls
- [x] Reentrancy guards on all user facing state changing functions
- [x] No slippage attacks (we need to validate expected tokens received)
- [x] Oracles, one of:
- [x] No oracles
- [ ] Oracles can't be bent
- [ ] If oracle can be bent, it won't hurt us.
- [x] Do not call balanceOf for external contracts to determine what they will do when they use internal accounting
Tests
🔴 We currently do not have unit tests for this new behavior. We have run a variety of one-off fork simulations of various yield scenarios.
- [ ] Each publicly callable method has a test
- [ ] Each logical branch has a test
- [ ] Each require() has a test
- [ ] Edge conditions are tested
- [ ] If tests interact with AMM make sure enough edge cases (pool tilts) are tested. Ideally with fuzzing.
Deploy
- [x] Deployer permissions are removed after deploy
Downstream
This will require an analytics change to use previewYield to determine the next yield rather than subtracting totalSupply from vaultValue. The off-chain team has already been notified of this coming change.
Thinking
Logic
In the event that we distributed a fee and yet decided not to do a final rebase because the universe was being very weird, we would not set lastRebase. However, if this were to happen, you would still not be able to get extra rebases since changeSupply would not be called. The token code we are now using adjusts totalSupply by exactly the mint amount, which means that condition should never trigger.
The check on newSupply <= supply is not actually a duplicate. It catches the case when there is no yield.
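A minimal sketch of those two checks, assuming the rebase path clamps the throttled new supply against the vault value and then exits early when there is no yield; the function shape here is an assumption, not the PR's literal code.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative only: shapes of the checks discussed above.
contract RebaseGuardSketch {
    // Returns the supply changeSupply would be called with, or the old supply
    // unchanged when no rebase should happen.
    function _clampedNewSupply(
        uint256 supply,
        uint256 vaultValue,
        uint256 allowedYield
    ) internal pure returns (uint256) {
        uint256 newSupply = supply + allowedYield;
        if (newSupply > vaultValue) newSupply = vaultValue; // never exceed backing
        // Not a duplicate of the solvency clamp: even with vaultValue > supply the
        // throttled yield can be zero, so this is the "no yield, skip changeSupply" exit.
        if (newSupply <= supply) return supply;
        return newSupply;
    }
}
```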
- [x] Correct usage of global & local variables. -> they might differentiate only by an underscore that can be overlooked (e.g. address vs _address).
Deployment Considerations
- [x] A max per second drip rate must be set, or the contract will distribute no yield. This is set in the current deploy file.
Internal State
The two most important invariants are:
- ✅ The vault value must never be less than the new total supply. (Always solvent)
- ✅ The new total supply must never be less than the old total supply. (Always up only)
✅ If there was a drop in the vault value, it's possible that both of these may not be true at the same time. If one of these is not true, we should not rebase.
✅ MAX_REBASE should be a safe value.
✅ rebasePerSecondMax must be equal to or less than MAX_REBASE_PER_SECOND
✅ It would be nice if the preview yield only went up over time, as long as the vault did not lose money. This is roughly true because the only time-dependent values affecting it are the vault value and the time elapsed, both of which should in theory only go up. Obviously not true 69 bazillion years in the future when yield is disabled.
✅ Preview yield should return the same yield that would actually be used by a rebase occurring at that moment. This appears to be roughly true.
Attack
An attacker causing a bad rebase is one of the primary attack vectors against OETH. In such a case, the attacker would first mint a large number of tokens, cause a bad rebase, and then either redeem or sell these tokens on AMMs.
Because of this, the rebase function is one of the most critical functions in our code. The rebase function depends on the vault value function being correct. In theory, this set of changes massively improves the security of OETH by adding another layer of defense. In the event an attacker fools a vault strategy into returning an extremely large balance, the amount of damage an attacker can cause is limited to the instantaneous rebase limits set by this code.
The strategist could disable the drip duration smoothing as well as max out the rebasePerSecondMax. However, in theory, this would still be a slow enough change that lending platforms should be safe, and hack damage would be mitigated to a few percent of funds.
Flavor
Unfortunately, I am blinded by self-bias on this code. I think it's froody. Others will be able to see flaws that I cannot.
A primary goal of this code has been that even if there are horrible errors in the yield calculation code, these errors cannot result in unsafe rebasing. The rebase code is written so that it constrains the next supply to be between the old supply and the vault value, regardless of the calculations.
In turn, the nextYield calculations start with the difference between the vault value and the supply, and each successive operation can only reduce the amount of yield from there. As a result, the nextYield function can only return a yield value between zero and that difference.
Actual rebasing and previewing yield work off the same shared calculations. Previews should be accurate.
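A compressed sketch of that structure, assuming one shared internal helper that both the preview path and the rebase path call; previewYield is named in this review, but the helper names and bodies below are illustrative assumptions.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative structure only: both paths share one calculation, and every
// step after the initial difference can only shrink the yield.
contract SharedYieldCalcSketch {
    uint256 public totalSupplyCached;
    uint256 public vaultValueCached;

    function previewYield() external view returns (uint256) {
        return _nextYield(totalSupplyCached, vaultValueCached);
    }

    function _rebase() internal {
        uint256 yieldNow = _nextYield(totalSupplyCached, vaultValueCached);
        // changeSupply would be called here with totalSupplyCached + yieldNow,
        // which by construction stays between the old supply and the vault value.
        totalSupplyCached += yieldNow;
    }

    function _nextYield(uint256 supply, uint256 vaultValue) internal pure returns (uint256 yieldNow) {
        if (vaultValue <= supply) return 0;
        yieldNow = vaultValue - supply;          // upper bound
        yieldNow = _applyDrip(yieldNow);         // can only reduce it
        yieldNow = _applyPerSecondCap(yieldNow); // can only reduce it
    }

    // Stubs standing in for the real limiting steps.
    function _applyDrip(uint256 y) internal pure returns (uint256) { return y; }
    function _applyPerSecondCap(uint256 y) internal pure returns (uint256) { return y; }
}
```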
Codecov Report
Attention: Patch coverage is 72.72727% with 15 lines in your changes missing coverage. Please review.
Project coverage is 41.16%. Comparing base (b31963c) to head (84214c2). Report is 21 commits behind head on master.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| contracts/contracts/vault/VaultCore.sol | 76.19% | 10 Missing :warning: |
| contracts/contracts/vault/VaultAdmin.sol | 61.53% | 5 Missing :warning: |
Additional details and impacted files
| | master | #2452 | +/- |
|---|---|---|---|
| Coverage | 47.12% | 41.16% | -5.97% |
| Files | 88 | 96 | +8 |
| Lines | 4104 | 4662 | +558 |
| Branches | 1074 | 1239 | +165 |
| Hits | 1934 | 1919 | -15 |
| Misses | 2168 | 2741 | +573 |
| Partials | 2 | 2 | |
Requirements
This PR makes some useful additions to the Vault, which is now able to:
- control the maximum rebase amount possible, to make the token safe for the lending markets and prevent any donation attacks:
  - the rebase is limited by the maximum one-time rebase
  - and by the rebase per second, calculated from the seconds since the last rebase
- implement the Dripper logic right in the Vault, dripping yield at a constant rate if the limits allow for it
- even if the strategist is compromised, the rebase cannot exceed 5% / day (see the sketch after this list)
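A back-of-the-envelope check of that 5% / day bound, assuming the cap is expressed as a per-second fraction in 1e18 fixed point; the actual encoding of MAX_REBASE_PER_SECOND in the PR may differ.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// 5% per day, expressed per second (assumed 1e18 fixed-point fraction):
// 0.05e18 / 86400 ≈ 5.787e11. A full day of rebases at this per-second rate
// therefore cannot add more than ~5% to the supply.
uint256 constant MAX_REBASE_PER_SECOND = uint256(5e16) / 86400;
```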
Easy Checks
Authentication
- [x] Never use tx.origin
- [x] Every external/public function is supposed to be externally accessible
- [x] Every external/public function has the correct authentication
Ethereum
- [x] Contract does not send or receive Ethereum.
- [x] Contract has no payable methods.
- [x] Contract is not vulnerable to being sent self destruct ETH
Cryptographic code
- [x] This contract code does not roll its own crypto.
- [x] No signature checks without reverting on a 0x00 result.
- [x] No signed data could be used in a replay attack, on our contract or others.
Gas problems
- [x] No loops
Black magic
- [x] Does not contain selfdestruct
- [x] Does not use delegatecall outside of proxying. If an implementation contract were to call delegatecall under attacker control, it could selfdestruct the implementation contract, leading to calls through the proxy silently succeeding even though they were failing.
- [x] Address.isContract should be treated as if it could return anything at any time, because that's reality.
Overflow
- [x] Code is solidity version >= 0.8.0
- [x] All for loops use uint256
Proxy
- [x] No storage variable initialized at definition when contract used as a proxy implementation.
Events
- [ ] All state changing functions emit events
  - ⚪ no event emitted when the rebasePerSecondTarget changes
Medium Checks
Rounding and casts
- [x] Contract rounds in the protocol's favor
- [x] Contract does not have bugs from losing rounding precision
  - ⚪ a really low dripDuration (say 2 seconds) would cause some rounding precision loss, but the effect would just be a more aggressive rounding down in the protocol's favour (see the sketch after this list)
- [x] Code correctly multiplies before division
  - ⚪ we don't when calling yield / (_dripDuration * 2), but that potentially just acts as a slight loss of resolution that isn't impactful
- [x] Contract does not have bugs from zero or near zero amounts
- [x] Safecast is always used when casting
  - there are a couple of places where the cast is direct, but both of those are safe
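A toy illustration of that rounding-down point, using assumed numbers rather than anything taken from the PR.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Toy example only: integer division by (dripDuration * 2) discards a remainder
// smaller than dripDuration * 2 wei, so even dripDuration = 2 only rounds a few
// wei down, in the protocol's favour.
contract RoundingSketch {
    function divideByTwiceDripDuration(uint256 pendingYield, uint256 dripDuration)
        external
        pure
        returns (uint256)
    {
        // pendingYield = 1e18 + 1, dripDuration = 2  =>  250000000000000000, 1 wei discarded
        return pendingYield / (dripDuration * 2);
    }
}
```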
Dependencies
- [x] Review any new contract dependencies thoroughly (e.g. OpenZeppelin imports) when new dependencies are added or version of dependencies changes.
- [x] If OpenZeppelin ACL roles are used, review & enumerate all of them.
- [x] Check OpenZeppelin security vulnerabilities and see if any apply to current PR considering the version of OpenZeppelin contract used.
External calls
- [x] Contract addresses passed in are validated
- [x] No unsafe external calls
- [x] Reentrancy guards on all state changing functions.
- [x] No malicious behaviors
- [x] Low level call() must require success.
- [x] No slippage attacks (we need to validate expected tokens received)
- [x] Oracles, one of:
- [x] No oracles
- [ ] Oracles can't be bent
- [ ] If oracle can be bent, it won't hurt us.
- [x] Do not call balanceOf for external contracts to determine what they will do when they use internal accounting
Tests
- [ ] Each publicly callable method has a test
- [ ] Each logical branch has a test
- [ ] Each require() has a test
- [ ] Edge conditions are tested
- [ ] If tests interact with AMM make sure enough edge cases (pool tilts) are tested. Ideally with fuzzing.
Deploy
- [X] Deployer permissions are removed after deploy
Downstream
- [ ] No changes to downstream required. Ideally the code could emit the altered rebasePerSecondTarget in an event, which could be useful for monitoring.
Thinking
Logic
Are there bugs in the logic?
- [x] Correct usage of global & local variables. -> they might differentiate only by an underscore that can be overlooked (e.g. address vs _address).
Deployment Considerations
Are there things that must be done on deploy, or in the wider ecosystem for this code to work. Are they done?
Internal State
- The protocol always needs to rebase only upwards. This code keeps that behaviour.
- The protocol shouldn't rebase more than the backing assets support, making it insolvent. This is also OK.
- The protocol shouldn't send so many fees to the trustee that it causes insolvency. A check for this is also made.
With this change the value accrual of the token is controlled using multiple mechanisms: a maximum single rebase limit & a per-second limit. These are controlled by the governor and also the strategist. Even in the case of the strategist going rogue, there is still a hardcoded limit of 5% / day, which should be a sufficient safeguard for the lending platforms.
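A minimal sketch of those three properties written out as explicit checks; the standalone helper and the exact fee condition are assumptions for illustration, since the PR enforces these inside its rebase path rather than in a separate function.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustration only: the three properties above as explicit requires.
contract RebaseInvariantSketch {
    function checkRebase(
        uint256 oldSupply,
        uint256 newSupply,
        uint256 vaultValue,
        uint256 fee
    ) external pure {
        require(newSupply >= oldSupply, "rebase must only go up");
        require(newSupply <= vaultValue, "rebase must stay solvent");
        require(fee <= newSupply - oldSupply, "fee must not exceed distributed yield");
    }
}
```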
Attack
The attacker could somehow do a bad rebase, e.g. by manipulating some of the underlying strategies to report too large a value and triggering a positive rebase. This code imposes extra limits on such actions, limiting the profitability of this type of attack.
Flavor
The code is very clean and readable. It also touches one of the "most core" parts of our protocol, requiring in-depth knowledge of the protocol and its functions. Just reading the code without a good understanding of the protocol might leave the reader confused. There are Readmes that help explain some of the principles. If we wanted to make our code more understandable, we'd probably need to invest more time into documentation. I am not sure that is high on the priority list.