feat: add `waitAsRateLimit` option on `http` transport
This PR introduces an option to enable a batch queue system by setting `batch.waitAsRateLimit` to `true` on the `http` transport. I was searching for a solution until I discovered #1305, which motivated me to explore ways to update the HTTP client.
The core idea is to modify the `batchScheduler` by replacing the `shouldSplitBatch` parameter with a `getBatchSize` parameter. Once the scheduler can determine the batch size itself, it becomes capable of queuing requests that exceed that size rather than sending them all at once.
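To make the idea concrete, here is a minimal standalone sketch of a scheduler built around a `getBatchSize` callback (the names and shape here are illustrative, not viem's actual internals): each tick flushes at most `getBatchSize(queue)` items and leaves the rest queued, which is exactly what lets a size cap double as a rate limit.

```typescript
// Hypothetical sketch — not viem's batchScheduler. Each tick flushes at
// most getBatchSize(queue) items; the remainder stays queued for the
// next tick, so the batch size also acts as a per-interval rate limit.
function createScheduler<T>(options: {
  wait: number;                            // ms between flushes
  getBatchSize: (pending: T[]) => number;  // max items to flush per tick
  fn: (batch: T[]) => void;                // executes one batch
}) {
  const queue: T[] = [];
  let timer: ReturnType<typeof setInterval> | undefined;

  function flush() {
    // Take at most `getBatchSize` items off the front of the queue.
    const batch = queue.splice(0, options.getBatchSize(queue));
    if (batch.length > 0) options.fn(batch);
    // Stop ticking once the queue has drained.
    if (queue.length === 0 && timer !== undefined) {
      clearInterval(timer);
      timer = undefined;
    }
  }

  return {
    schedule(item: T) {
      queue.push(item);
      if (timer === undefined) timer = setInterval(flush, options.wait);
    },
    get pending() {
      return queue.length;
    },
  };
}
```

With `getBatchSize: () => 10` and `wait: 1000`, five hundred queued requests would drain at ten per second instead of being split into fifty simultaneous batches.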
I also updated `multicall` to use `getBatchSize`, with behavior matching the previous version.
The main drawback is that if requests are enqueued faster than they are flushed, the queue can grow without bound, causing increasing delays or hitting cache limits. I have added a warning about this in the documentation.
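The growth concern can be made concrete with a little arithmetic (a hypothetical illustration, not measured behavior): whenever the arrival rate exceeds the batch size per wait interval, the backlog grows linearly by the difference.

```typescript
// Hypothetical illustration of the unbounded-growth failure mode:
// `arrivalRate` requests are enqueued per wait interval, but at most
// `batchSize` are flushed, so the backlog grows by the difference.
function simulateBacklog(opts: {
  arrivalRate: number; // requests enqueued per wait interval
  batchSize: number;   // requests flushed per wait interval
  ticks: number;       // number of intervals to simulate
}): number {
  let backlog = 0;
  for (let i = 0; i < opts.ticks; i++) {
    backlog += opts.arrivalRate;
    backlog -= Math.min(backlog, opts.batchSize);
  }
  return backlog;
}

// 20 requests/tick in, 10 requests/tick out: the backlog grows
// by 10 per tick and never drains.
```

This is why the option is best suited to bursty traffic that eventually pauses long enough for the queue to drain.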
That said, for users interacting with rate-limited endpoints, this option should make those interactions much easier to manage.
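If the option lands as described, usage on the `http` transport might look like this (a sketch based on this PR's description; the exact shape could change in review):

```ts
import { createPublicClient, http } from 'viem'
import { mainnet } from 'viem/chains'

const client = createPublicClient({
  chain: mainnet,
  transport: http(undefined, {
    batch: {
      batchSize: 10,         // max requests per JSON-RPC batch
      wait: 1_000,           // ms between batches
      waitAsRateLimit: true, // proposed: queue overflow instead of splitting
    },
  }),
})
```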
⚠️ No Changeset found
Latest commit: f7e671443380347c5a65e43fe8afa9dd56a9f4fe
Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.
@iyarsius is attempting to deploy a commit to the Wevm Team on Vercel.
A member of the Team first needs to authorize it.
Thanks, will review!