Auto pagination does not work with nested first
Hey @ardatan,
Auto pagination doesn't do anything when first is nested; these counts should be in the tens of thousands.
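For illustration, the shape of the query is roughly this (entity and field names here are placeholders, not from my actual subgraph):

{
  users(first: 100) {
    id
    positions(first: 20000) {
      id
    }
  }
}

The top-level first gets handled, but the nested first: 20000 is not split up, so each nested list comes back capped at the node's per-query limit.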


Could you share a reproduction on CodeSandbox or StackBlitz? Thanks!
Have you got a repro template for CodeSandbox or StackBlitz I can fork, by any chance?
Currently we don't have any, but I can create one for you if you have issues with those.
Yeah, if you have a basic template I could use for repros that would be great.
I created this one, but I'm not sure if you use it as an SDK with Node, the browser, Apollo Client, Urql, etc.: https://codesandbox.io/s/hardcore-antonelli-7e6mlr?file=/example-query.graphql
I think this PR covers the use case: https://github.com/graphprotocol/graph-client/pull/163
Available in the latest release! https://github.com/graphprotocol/graph-client/pull/164#issue-1315214243
Thanks @ardatan! I notice when setting first to >5000, I start seeing these errors:
AggregateError: The skip argument must be between 0 and 5000, but is 6000,
The skip argument must be between 0 and 5000, but is 7000,
The skip argument must be between 0 and 5000, but is 8000,
The skip argument must be between 0 and 5000, but is 9000,
The skip argument must be between 0 and 5000, but is 10000,
The skip argument must be between 0 and 5000, but is 11000,
The skip argument must be between 0 and 5000, but is 12000,
The skip argument must be between 0 and 5000, but is 13000,
The skip argument must be between 0 and 5000, but is 14000,
The skip argument must be between 0 and 5000, but is 15000,
The skip argument must be between 0 and 5000, but is 16000,
The skip argument must be between 0 and 5000, but is 17000,
The skip argument must be between 0 and 5000, but is 18000,
The skip argument must be between 0 and 5000, but is 19000,
The skip argument must be between 0 and 5000, but is 20000,
The skip argument must be between 0 and 5000, but is 21000,
The skip argument must be between 0 and 5000, but is 22000,
The skip argument must be between 0 and 5000, but is 23000,
The skip argument must be between 0 and 5000, but is 24000
I guess this is unexpected?
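For reference, I assume the transform is generating aliased chunks like the following, and every chunk past skip: 5000 gets rejected by the node (field name is a placeholder):

{
  positions_4: positions(first: 1000, skip: 4000) { id }
  positions_5: positions(first: 1000, skip: 5000) { id }
  positions_6: positions(first: 1000, skip: 6000) { id }
}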
Ah, I see the "skipArgumentLimit" on the auto-pagination transform config. That looks like it'll do the trick.
Unfortunately not; it seems anything above 5k will fail.
Ok, so I don't think it is a good idea to use multiple queries for the pagination of nested fields, because we would need multiple queries for each nested field, and that might take forever on the client side. When the skip limit is exceeded, we have to make sequential queries that block each other. @dotansimha Not sure if this is what we want to have in Graph Client.
@ardatan is there a reason skip is being used? I know the docs recommend against using skip for fetching a large number of entities, which is what we need in this use case (getting all positions for a user/contract).
https://thegraph.com/docs/en/querying/graphql-api/#example-using-and-2
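i.e. the pattern the docs recommend, paginating on id instead of skip (sketch; entity name is a placeholder):

{
  positions(first: 1000, orderBy: id, where: { id_gt: "<last id from previous page>" }) {
    id
  }
}

Each page feeds its last id into the next request's id_gt, so skip is never needed.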
@matthewlilley Thanks to skip, we can still get all the data we need in a single query:
{
  users(first: 2500) {
    ...
  }
}
becomes the following, so it is much faster than sending multiple sequential queries that each wait for the previous one's last id:
{
  users_0: users(first: 1000) {
    ...
  }
  users_1: users(first: 1000, skip: 1000) {
    ...
  }
  users_2: users(first: 500, skip: 2000) {
    ...
  }
}
However, once skip exceeds 5000 we have to fall back to sequential queries. And this is tricky for nested fields, because we would need to make the same query again and again together with the parent selection sets, and the permutations multiply if you have another paginated field in the same query.
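Roughly, splitting a nested first would have to look like this sketch, where every parent alias repeats all of the nested aliases:

{
  users_0: users(first: 1000) {
    positions_0: positions(first: 1000) {
      ...
    }
    positions_1: positions(first: 1000, skip: 1000) {
      ...
    }
  }
  users_1: users(first: 1000, skip: 1000) {
    positions_0: positions(first: 1000) {
      ...
    }
    positions_1: positions(first: 1000, skip: 1000) {
      ...
    }
  }
}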
This will take forever :) Query -> Get Result -> Make another query -> Get Result
But still, I will take a look at the issue with nested fields and 5000+ records.
@ardatan Gotcha! The alternative is in the other issue, https://github.com/graphprotocol/graph-client/issues/148, but of course that is problematic as well, because you'd need to scan all IDs to get back what you want...