ape-alchemy
feat: add rate limit and faster default speed
What I did
Adds rate limiting, implementing logic similar to alchemy-web3 but with exponential backoff, which the Alchemy docs say is preferred.

Fixes #60.

Also addresses ape issue #613, which suggests adding rate limiting as another mix-in for ProviderAPI. Not sure exactly what that means, but this modifies a sub-class of a sub-class of ProviderAPI.
How I did it
A for loop wraps the `_make_request` call: if a call succeeds, its result is returned immediately; otherwise the loop sleeps and retries. If the loop exhausts its attempts, it raises an error saying the provider is rate limited.
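A rough sketch of that loop (the function and exception names here are illustrative stand-ins, not the plugin's actual code; the parameter names mirror the config fields shown later in this thread):

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for whatever the provider raises on an HTTP 429."""


def make_request_with_retry(make_request, method, params,
                            min_retry_delay=1000, retry_backoff_factor=2,
                            max_retry_delay=30000, max_retries=3,
                            retry_jitter=250):
    # Delay values are in milliseconds.
    for attempt in range(max_retries):
        try:
            return make_request(method, params)  # success: return immediately
        except RateLimitError:
            delay_ms = min(min_retry_delay * retry_backoff_factor ** attempt,
                           max_retry_delay)
            delay_ms += random.randint(0, retry_jitter)  # jitter avoids thundering herd
            time.sleep(delay_ms / 1000)
    raise RateLimitError(f"still rate limited after {max_retries} attempts")
```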
How to verify it
I only checked this against an intro notebook called hello_ape.ipynb, but I don't see why it shouldn't work generally. On the two timed queries this is 6x (1.98s vs 0.335s) and 13x (202s vs 15.9s) faster.
Checklist
- [ ] Passes all linting checks (pre-commit and CI jobs)
- [ ] New test cases have been added and are passing
- [ ] Documentation has been updated
- [x] PR title follows Conventional Commit standard (will be automatically included in the changelog)
Incorporated the remaining comments.

Checked that it still works with the latest ape version (0.5.7.dev5+gf8d4ab07) using this test notebook: https://github.com/wakamex/ape-stuff/blob/main/hello_ape.ipynb. You can see the Alchemy provider config is basically empty right now (4th cell):
```
provider.config of type=<class 'ape.api.config.PluginConfig'>
env_prefix =
env_file = None
env_file_encoding = None
env_nested_delimiter = None
secrets_dir = None
validate_all = True
extra = forbid
arbitrary_types_allowed = True
case_sensitive = False
prepare_field = <classmethod(<cyfunction BaseSettings.Config.prepare_field at 0x0000015911EE97D0>)>
customise_sources = <classmethod(<cyfunction BaseSettings.Config.customise_sources at 0x0000015911EE98A0>)>
parse_env_var = <classmethod(<cyfunction BaseSettings.Config.parse_env_var at 0x0000015911EE9970>)>
```
Found this cool package today: https://pyratelimiter.readthedocs.io/en/latest/
love this! what is status? would hate to see this stale...

> love this! what is status? would hate to see this stale...

Quite stale, needs another champion unless @wakamex comes back and implements the feedback. Just a few things to add, and could also implement this for other providers.
I have been summoned! Been pretty heads down on other stuff since December, but I learned a lot about incorporating PR comments since then 😅

Lemme look at this again; I'll ping in Discord with any questions.
I learned how to use custom configs! Added `concurrency` and `block_page_size` to the custom config, so we can see nice outputs like this:

```python
print(f"provider.config of type={type(context.provider.config)}")
for k, v in context.provider.config.__dict__.items():
    if not k.startswith("__"):
        print(f" {k} = {v}")
```
```
provider.config of type=<class 'ape_alchemy.provider.AlchemyConfig'>
 concurrency = 1
 block_page_size = 250000
 min_retry_delay = 1000
 retry_backoff_factor = 2
 max_retry_delay = 30000
 max_retries = 3
 retry_jitter = 250
```
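Assuming the delay doubles from `min_retry_delay` each attempt and is capped at `max_retry_delay` (consistent with the "attempt 1/3 in 1000 ms" log lines in the benchmark output below), the nominal retry schedule implied by this config can be computed:

```python
def retry_delays(min_delay_ms=1000, factor=2, max_delay_ms=30000, retries=3):
    # Nominal delay before each retry, ignoring the random jitter
    # of up to retry_jitter=250 ms added on top.
    return [min(min_delay_ms * factor ** i, max_delay_ms)
            for i in range(retries)]


print(retry_delays())  # [1000, 2000, 4000]
```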
I set `block_page_size` to a really big number, though I don't see any reason it shouldn't be infinite, since it seems to act as an upper limit. Here are my test results:
```
default:
alchemy through ape took 78.44 seconds for 1291 events (events per second: 16.46)

block_page_size = 50_000:
alchemy through ape took 0.85 seconds for 35 events (events per second: 41.09)
alchemy through ape took 2.37 seconds for 1290 events (events per second: 544.34)
alchemy through ape took 61.28 seconds for 40281 events (events per second: 657.27)

block_page_size = 10_000:
alchemy through ape took 0.84 seconds for 35 events (events per second: 41.64)
alchemy through ape took 1.72 seconds for 1290 events (events per second: 750.73)
alchemy through ape took 29.22 seconds for 40281 events (events per second: 1378.50)

block_page_size = 50_000:
alchemy through ape took 0.88 seconds for 35 events (events per second: 39.91)
alchemy through ape took 1.09 seconds for 1290 events (events per second: 1179.53)
alchemy through ape took 13.13 seconds for 40281 events (events per second: 3067.61)
INFO (ape-alchemy): Alchemy compute units exceeded, retrying, attempt 1/3 in 1000 ms
INFO (ape-alchemy): Alchemy compute units exceeded, retrying, attempt 1/3 in 1000 ms
INFO (ape-alchemy): Alchemy compute units exceeded, retrying, attempt 1/3 in 1000 ms
INFO (ape-alchemy): Alchemy compute units exceeded, retrying, attempt 1/3 in 1000 ms
INFO (ape-alchemy): Alchemy compute units exceeded, retrying, attempt 1/3 in 1000 ms
INFO (ape-alchemy): Alchemy compute units exceeded, retrying, attempt 1/3 in 1000 ms
INFO (ape-alchemy): Alchemy compute units exceeded, retrying, attempt 1/3 in 1000 ms
INFO (ape-alchemy): Alchemy compute units exceeded, retrying, attempt 1/3 in 1000 ms
alchemy through ape took 57.42 seconds for 123991 events (events per second: 2159.28)

block_page_size = 250_000:
alchemy through ape took 0.64 seconds for 35 events (events per second: 54.67)
alchemy through ape took 1.18 seconds for 1290 events (events per second: 1092.30)
alchemy through ape took 12.72 seconds for 40281 events (events per second: 3165.94)
alchemy through ape took 39.44 seconds for 123991 events (events per second: 3143.90)

infura default:
infura through ape took 0.46 seconds for 35 events (events per second: 76.43)
infura through ape took 1.01 seconds for 1290 events (events per second: 1276.06)
infura through ape took 13.15 seconds for 40281 events (events per second: 3062.54)
infura through ape took 41.21 seconds for 123991 events (events per second: 3008.86)
```
Removed the class fields, since every Alchemy class should have access to the AlchemyConfig, I believe. Also removed `network_uris`, since mypy didn't complain.
Not all requests use the internal `_make_request()` method: some use higher-level web3.py methods, which have their own `make_request` inside web3.py.
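One way to catch those higher-level calls too is web3.py's middleware layer (web3.py v5/v6), since a middleware wraps `make_request` regardless of which API triggered it. A rough sketch of the pattern; the error-response shape checked here is an assumption for illustration, not the plugin's actual code:

```python
def retry_once_middleware(make_request, w3):
    # web3.py middleware factory: receives the next make_request and the
    # Web3 instance, returns a replacement request function. Register with
    # w3.middleware_onion.add(retry_once_middleware).
    def middleware(method, params):
        response = make_request(method, params)
        # Assumed shape of a rate-limited JSON-RPC error response.
        if isinstance(response, dict) and response.get("error", {}).get("code") == 429:
            # A real version would apply the exponential backoff here.
            response = make_request(method, params)
        return response
    return middleware
```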
Which other methods need testing? I only tested this with `query`:

```python
curve_steth = Contract("0xDC24316b9AE028F1497c275EB9192a3Ea0f67022")
events = curve_steth.TokenExchange.query("*", start_block=chain.blocks[-1].number - int(86400*365/12))  # last 1 year
```
did you just merge your main into my main?
try me now
Note: rate limit test would be hard and exhaust our already limited requests

> Note: rate limit test would be hard and exhaust our already limited requests

this implies you can't test Python logic without making live network requests. I am not saying I am requiring tests, but I am calling this out as a false excuse.
with a big `block_page_size` we can test with 1 request that gets lots of data. but since Alchemy uses "compute units" it'll still count a lot toward your rate limit. but I doubt you use your Alchemy API key a ton?
> Note: rate limit test would be hard and exhaust our already limited requests
>
> this implies you can't test Python logic without making live network requests.

That's fair, we can unit test it with mocks and that works.
> with a big `block_page_size` we can test with 1 request that gets lots of data. but since Alchemy uses "compute units" it'll still count a lot toward your rate limit. but I doubt you use your Alchemy API key a ton?

We do use it quite a bit for some other plugins too, plus our own private testing use, so it's been a problem in the past.
Found a benchmark to justify an infinite block page size:

```python
import time

# assumes an ape console / notebook where `networks` and `Contract` are in scope
connection = "alchemy through ape"
context = networks.parse_network_choice("ethereum:mainnet:alchemy")
context.__enter__()

legacy_abi = '[{"anonymous": false,"inputs": [{ "indexed": false, "name": "postTotalPooledEther", "type": "uint256" },{ "indexed": false, "name": "preTotalPooledEther", "type": "uint256" },{ "indexed": false, "name": "timeElapsed", "type": "uint256" },{ "indexed": false, "name": "totalShares", "type": "uint256" }],"name": "PostTotalShares","type": "event"}]'
oracle1 = Contract("0x442af784A788A5bd6F42A01Ebe9F287a871243fb", abi=legacy_abi)  # steth legacy oracle
v2_abi = '[{"anonymous": false,"inputs": [{ "indexed": true, "name": "reportTimestamp", "type": "uint256" },{ "indexed": false, "name": "timeElapsed", "type": "uint256" },{ "indexed": false, "name": "preTotalShares", "type": "uint256" },{ "indexed": false, "name": "preTotalEther", "type": "uint256" },{ "indexed": false, "name": "postTotalShares", "type": "uint256" },{ "indexed": false, "name": "postTotalEther", "type": "uint256" },{ "indexed": false, "name": "sharesMintedAsFees", "type": "uint256" }],"name": "TokenRebased","type": "event"}]'
steth = Contract("0xae7ab96520de3a18e5e111b5eaab095312d7fe84", abi=v2_abi)  # Lido v2 main contract

# %%
# local node through ape took 41.28 seconds for 1011 events (events per second: 24.49)
# 250k block page size: alchemy through ape took 10.33 seconds for 1012 events (events per second: 97.95)
# 2.5m block page size: alchemy through ape took 1.60 seconds for 1012 events (events per second: 631.47)
# 25m block page size: alchemy through ape took 0.90 seconds for 1012 events (events per second: 1128.32)
# infura through ape took 9.62 seconds for 1012 events (events per second: 105.17)
start_time = time.time()
events = oracle1.PostTotalShares.query("*")
print(f"{connection} took {time.time()-start_time:.2f} seconds for {len(events)} events (events per second: {len(events)/(time.time()-start_time):.2f})")

# %%
# local node through ape took 69.69 seconds for 267 events (events per second: 3.83)
# 250k block page size: alchemy through ape took 11.92 seconds for 268 events (events per second: 22.48)
# 2.5m block page size: alchemy through ape took 1.34 seconds for 268 events (events per second: 200.64)
# 25m block page size: alchemy through ape took 0.90 seconds for 1012 events (events per second: 1128.32)
# infura through ape took 9.39 seconds for 268 events (events per second: 28.55)
start_time = time.time()
events2 = steth.TokenRebased.query("*")
print(f"{connection} took {time.time()-start_time:.2f} seconds for {len(events2)} events (events per second: {len(events2)/(time.time()-start_time):.2f})")
```
how would I mock up some tests? with something like a MockLaggyProvider to connect to?

> how would I mock up some tests? with like a MockLaggyProvider to connect to?

Using pytest-mock, you can basically mock anything in Python land. Sometimes I mock web3 and then configure some side-effect responses which can simulate waiting.

If it is too complex, I'd be happy to take a swing at it and share how I did it.
I can take a look; pytest-mock sounds like it should be easy.
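A minimal sketch of that mocking approach with `unittest.mock` (which pytest tests can use directly); all names here are hypothetical stand-ins for the plugin's real classes, and the sleeps are omitted so the test runs instantly:

```python
from unittest.mock import MagicMock


class RateLimitError(Exception):
    """Stand-in for whatever the plugin raises on an HTTP 429."""


# Simulate a provider whose first two requests hit the rate limit and
# whose third succeeds.
mock_web3 = MagicMock()
mock_web3.provider.make_request.side_effect = [
    RateLimitError("429"),
    RateLimitError("429"),
    {"result": "0x1"},
]


def call_with_retry(web3, method, params, max_retries=3):
    # Stand-in for the plugin's retry loop, without the backoff sleeps.
    for _ in range(max_retries):
        try:
            return web3.provider.make_request(method, params)
        except RateLimitError:
            continue
    raise RateLimitError("exhausted retries")


assert call_with_retry(mock_web3, "eth_blockNumber", []) == {"result": "0x1"}
assert mock_web3.provider.make_request.call_count == 3
```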
Started mocking up a test for the exponential backoff.

Oh, and I noticed a drawback of setting the page size really high: when there are more than X results, Alchemy throws an error.