feat: implement default rate limit configuration
Follow-up to #11646.
This PR adds default rate limits to a node. The values are chosen to minimize the risk of cutting off legitimate traffic while still providing a safeguard against bad actors. See also the original proposal.
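For intuition, limits of this kind are commonly implemented as a per-message-type token bucket, where a refill rate bounds sustained throughput and the bucket capacity bounds bursts. The sketch below is illustrative only; the struct and field names are assumptions, not the actual API introduced by this PR.

```rust
use std::time::Instant;

/// Minimal token-bucket sketch (names are hypothetical, not nearcore's API).
/// `refill_rate` caps the sustained message rate; `max_burst` caps spikes.
struct TokenBucket {
    tokens: f64,
    max_burst: f64,
    refill_rate: f64, // tokens added per second
    last_refill: Instant,
}

impl TokenBucket {
    fn new(refill_rate: f64, max_burst: f64) -> Self {
        Self { tokens: max_burst, max_burst, refill_rate, last_refill: Instant::now() }
    }

    /// Returns false when the message should be dropped as rate limited.
    fn try_consume(&mut self, cost: f64) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.last_refill = now;
        self.tokens = (self.tokens + elapsed * self.refill_rate).min(self.max_burst);
        if self.tokens >= cost {
            self.tokens -= cost;
            true
        } else {
            false
        }
    }
}
```

Under this model, defaults that minimize the risk of cutting legit traffic translate to generous `refill_rate` and `max_burst` values per message type, sized well above the traffic observed on healthy nodes.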
To increase confidence before merging, I propose:
- Run a mainnet canary-like node for 1-2 weeks and observe that no rate limits are ever hit
Question: are there other node configurations that see a different shape of traffic, and thus might hit these limits?
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 71.52%. Comparing base (1eeee00) to head (eb6f0ae).
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master   #11684      +/-   ##
==========================================
+ Coverage   71.49%   71.52%   +0.02%
==========================================
  Files         810      810
  Lines      164010   164065      +55
==========================================
+ Hits       117261   117346      +85
+ Misses      41657    41633      -24
+ Partials     5092     5086       -6
```
| Flag | Coverage Δ | |
|---|---|---|
| backward-compatibility | 0.23% <0.00%> (-0.01%) | :arrow_down: |
| db-migration | 0.23% <0.00%> (-0.01%) | :arrow_down: |
| genesis-check | 1.33% <0.00%> (-0.01%) | :arrow_down: |
| integration-tests | 38.44% <74.62%> (+<0.01%) | :arrow_up: |
| linux | 71.30% <100.00%> (+0.02%) | :arrow_up: |
| linux-nightly | 71.10% <100.00%> (+0.01%) | :arrow_up: |
| macos | 54.35% <52.54%> (+0.65%) | :arrow_up: |
| pytests | 1.60% <0.00%> (-0.01%) | :arrow_down: |
| sanity-checks | 1.40% <0.00%> (-0.01%) | :arrow_down: |
| unittests | 65.62% <95.52%> (+0.04%) | :arrow_up: |
| upgradability | 0.28% <0.00%> (-0.01%) | :arrow_down: |
Flags with carried forward coverage won't be shown.
I have deployed a mainnet "canary-like" node with this build: link
I checked manually that config overrides trigger rate limiting. Next, I'll leave the node running for a while and monitor the metric `near_peer_message_rate_limited_by_type_total`.
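For anyone reproducing the check, a counter of that shape is typically declared with the `prometheus` crate roughly as follows; this is a sketch under that assumption, and the actual definition in nearcore may go through its own metrics helpers.

```rust
use once_cell::sync::Lazy;
use prometheus::{register_int_counter_vec, IntCounterVec};

// Sketch of a per-message-type drop counter using the `prometheus` crate;
// the label name and help text here are illustrative.
static RATE_LIMITED: Lazy<IntCounterVec> = Lazy::new(|| {
    register_int_counter_vec!(
        "near_peer_message_rate_limited_by_type_total",
        "Peer messages dropped by the rate limiter, by message type",
        &["type"]
    )
    .unwrap()
});

// Hypothetical drop site: called whenever a message fails the rate-limit check.
fn on_rate_limited(message_type: &str) {
    RATE_LIMITED.with_label_values(&[message_type]).inc();
}
```

On the dashboard side, a query such as `rate(near_peer_message_rate_limited_by_type_total[5m])` should stay flat at zero for the whole canary window if the defaults are generous enough.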