graphql-relay-js
[RFC] Changes to connections spec and implementation
This will be a long one, so please bear with me 😅.
All of the ideas here relate to pagination, and thus only to the connections part of the Relay spec.
After much thought (going over #41, #58, #91, connection.js, and the connection spec, and implementing the spec's algorithms for RethinkDB), I've come to a few ideas that I would love to hear comments on, both from the community and from the spec and GraphQL.js implementors.
If the ideas expressed here are of interest, I'd be honored to submit a PR.
Let's start with the simple ones:
Spec:
- The spec doesn't require that the `cursor` field be non-null on the edge type. Should every node have a cursor? It would make sense that if a node exists then there is a unique opaque identifier that can serve as a cursor. Which brings me to the next thought...
- In #41 @olegilyenko brought up the idea of `nodeType` always being nullable in the implementation. Would it make more sense for the `node` field in edges to be non-null?
- In the same vein as the two previous thoughts, would it make more sense for the `edges` field of a connection to be a non-null list of non-null edges (currently it is a list of edges)? `pageInfo` is required, and its fields `hasNextPage` and `hasPreviousPage` are required as well, but these lose all meaning if the `edges` field can be null. (See the sketch after this list for the shape this would produce.)
- Finally (regarding the spec), I propose a few changes to the pagination algorithms in the spec to support bidirectional pagination as discussed in #58. The algorithm is described at the end of this post.
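
For illustration, here is a rough sketch (written against graphql-js directly rather than `connectionDefinitions`) of the non-null shape these suggestions point at. The `Ship` node type and the hand-rolled `PageInfo` type are placeholders of my own, used only to keep the sketch self-contained:

```js
const {
  GraphQLObjectType,
  GraphQLNonNull,
  GraphQLList,
  GraphQLString,
  GraphQLBoolean,
} = require('graphql');

// Placeholder node type, only here to make the sketch self-contained.
const ShipType = new GraphQLObjectType({
  name: 'Ship',
  fields: { name: { type: GraphQLString } },
});

const PageInfoType = new GraphQLObjectType({
  name: 'PageInfo',
  fields: {
    hasNextPage: { type: new GraphQLNonNull(GraphQLBoolean) },
    hasPreviousPage: { type: new GraphQLNonNull(GraphQLBoolean) },
  },
});

const ShipEdgeType = new GraphQLObjectType({
  name: 'ShipEdge',
  fields: {
    // If an edge exists, it has an opaque cursor, so make it non-null.
    cursor: { type: new GraphQLNonNull(GraphQLString) },
    // An edge without a node is arguably meaningless, so make node non-null too.
    node: { type: new GraphQLNonNull(ShipType) },
  },
});

const ShipConnectionType = new GraphQLObjectType({
  name: 'ShipConnection',
  fields: {
    // A non-null list of non-null edges instead of a nullable list of nullable edges.
    edges: {
      type: new GraphQLNonNull(new GraphQLList(new GraphQLNonNull(ShipEdgeType))),
    },
    pageInfo: { type: new GraphQLNonNull(PageInfoType) },
  },
});
```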
Implementation:
- Implement any or all of the changes mentioned above.
- Currently `first` and `last` are validated in arrayconnection.js (i.e., in the implementation of the paging algorithm). Would it make more sense for this validation to happen at the schema level (which I think is what the spec intends) by implementing an `UnsignedInt` scalar and using it for those arguments in `forwardConnectionArgs`, `backwardConnectionArgs`, and `connectionArgs`? (A sketch of such a scalar follows this list.)
- Would it be useful to implement a function that validates a schema by making sure that the cursor type used to query connections is the same as the cursor type of the connection's edges?
- The spec notes that a cursor can be any scalar that serializes to a string. Currently, this implementation forces cursors to be a string. Expanding on #91, would it make sense to add a `cursorType` parameter to `connectionDefinitions` that would use the given cursor type instead of `GraphQLString`? `forwardConnectionArgs`, `backwardConnectionArgs`, and `connectionArgs` could then be returned from `connectionDefinitions` with the appropriate cursor type (removing the need for the schema validator from the previous thought, as long as the developer uses the returned objects).
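
To make the `UnsignedInt` idea concrete, here is a minimal sketch of such a scalar and of connection args built with it. `UnsignedInt` and `connectionArgsWithValidation` are hypothetical names of my own; none of this exists in graphql-relay-js today:

```js
const { GraphQLScalarType, GraphQLString, GraphQLError, Kind } = require('graphql');

// Reject anything that isn't a non-negative integer.
function coerceUnsignedInt(value) {
  const intValue = Number(value);
  if (!Number.isInteger(intValue) || intValue < 0) {
    throw new GraphQLError(`UnsignedInt cannot represent ${value}.`);
  }
  return intValue;
}

// Hypothetical scalar that moves the first/last validation from
// arrayconnection.js into the schema layer.
const UnsignedInt = new GraphQLScalarType({
  name: 'UnsignedInt',
  serialize: coerceUnsignedInt,
  parseValue: coerceUnsignedInt,
  parseLiteral(ast) {
    if (ast.kind !== Kind.INT) {
      throw new GraphQLError('UnsignedInt must be an integer.');
    }
    return coerceUnsignedInt(parseInt(ast.value, 10));
  },
});

// Connection args mirroring the shape of connectionArgs in connection.js,
// but with first/last typed as UnsignedInt instead of GraphQLInt.
const connectionArgsWithValidation = {
  after: { type: GraphQLString },
  first: { type: UnsignedInt },
  before: { type: GraphQLString },
  last: { type: UnsignedInt },
};

module.exports = { UnsignedInt, connectionArgsWithValidation };
```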
And that's it! Again, I will most likely be implementing most of these for our backend, so I wouldn't mind at all submitting a PR. I also understand that these might not be useful to others, but I thought they could be, so I wanted to hear people's thoughts.
Thanks for reading this huge wall of text. Please comment, criticize, and ask away! 🍺
Pagination Algorithm
... finally, here's the updated pagination algorithm. It should prove trivial to implement with any database as long as there is a way to get the count of records (I successfully implemented it in RethinkDB and can post that source code here if people want to see it). This would solve #58.
Arguments: allEdges, after, before, first, last
- Initialize edges to be allEdges.
- Let count be the number of elements in edges.
- Let afterIndex be the index of the edge in edges whose cursor is equal to the after argument, or -1 if not found.
- If afterIndex is not -1:
  - Remove all elements of edges before and including afterIndex.
- Let beforeIndex be the index of the edge in edges whose cursor is equal to the before argument, or -1 if not found.
- If beforeIndex is not -1:
  - Remove all elements of edges after and including beforeIndex.
- Let beforeCount be the number of elements in edges.
- If first is set:
  - If first is less than 0: (this check wouldn't be necessary if we ensured first is non-negative)
    - Throw an error.
  - If edges has length greater than first:
    - Slice edges to be of length first by removing edges from the end of edges.
- Let actualFirst be the number of elements in edges.
- If last is set:
  - If last is less than 0: (this check wouldn't be necessary if we ensured last is non-negative)
    - Throw an error.
  - If edges has length greater than last:
    - Slice edges to be of length last by removing edges from the start of edges.
- Let actualLast be the number of elements in edges.
- If afterIndex is not -1:
  - Let hasPreviousPage be true.
- If afterIndex is -1:
  - If beforeIndex is 0:
    - Let hasPreviousPage be false.
  - If beforeIndex is not 0:
    - If actualFirst is greater than actualLast:
      - Let hasPreviousPage be true.
    - If actualFirst is less than or equal to actualLast:
      - Let hasPreviousPage be false.
- If afterIndex is equal to count - 1:
  - Let hasNextPage be false.
- If afterIndex is not equal to count - 1:
  - If beforeIndex is not -1:
    - Let hasNextPage be true.
  - If beforeIndex is -1:
    - If actualFirst is less than beforeCount:
      - Let hasNextPage be true.
    - If actualFirst is greater than or equal to beforeCount:
      - Let hasNextPage be false.
Returns: edges, hasPreviousPage, and hasNextPage.
There might be a more succinct way to write the algorithm, but I'm 100% sure this works, and I used the one from the spec as a baseline.
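
For concreteness, here is a minimal in-memory sketch of the algorithm above in plain JavaScript. The `paginate` name and the assumption that each edge object carries a `cursor` property are mine; a real resolver would push the equivalent filtering and slicing down to the database (that's what my RethinkDB version does) rather than loading all edges.

```js
function paginate(allEdges, { after, before, first, last }) {
  let edges = allEdges.slice();
  const count = edges.length;

  // Index of the `after` cursor in the full list, or -1 if absent / not found.
  const afterIndex = after != null ? edges.findIndex(e => e.cursor === after) : -1;
  if (afterIndex !== -1) {
    // Drop everything before and including the `after` edge.
    edges = edges.slice(afterIndex + 1);
  }

  // Index of the `before` cursor in the remaining list, or -1 if absent / not found.
  const beforeIndex = before != null ? edges.findIndex(e => e.cursor === before) : -1;
  if (beforeIndex !== -1) {
    // Drop everything after and including the `before` edge.
    edges = edges.slice(0, beforeIndex);
  }
  const beforeCount = edges.length;

  // Apply `first`, then record how many edges survived.
  if (first != null) {
    if (first < 0) throw new Error('Argument "first" must be non-negative.');
    if (edges.length > first) edges = edges.slice(0, first);
  }
  const actualFirst = edges.length;

  // Apply `last`, then record how many edges survived.
  if (last != null) {
    if (last < 0) throw new Error('Argument "last" must be non-negative.');
    if (edges.length > last) edges = edges.slice(edges.length - last);
  }
  const actualLast = edges.length;

  let hasPreviousPage;
  if (afterIndex !== -1) {
    hasPreviousPage = true;
  } else if (beforeIndex === 0) {
    hasPreviousPage = false;
  } else {
    hasPreviousPage = actualFirst > actualLast;
  }

  let hasNextPage;
  if (afterIndex === count - 1) {
    hasNextPage = false;
  } else if (beforeIndex !== -1) {
    hasNextPage = true;
  } else {
    hasNextPage = actualFirst < beforeCount;
  }

  return { edges, hasPreviousPage, hasNextPage };
}
```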
Any update on this?
Regarding the non-nullability suggestions: that would mean an error while resolving the node of any edge in a connection would always propagate and invalidate the entire connection, and there would be no flexibility on the implementing side to relax that behaviour while still conforming.
As far as I know, right now nothing prevents a schema from asserting non-nullability for those fields on a particular edge type, if it wants or needs to. But to enforce that at the spec level for all edge types seems particularly restrictive in a very brittle way.
@erydo, that's a very good point! I do agree that having the errors invalidate the entire connection is undesirable. Perhaps the non-nullability suggestions for nodes or edges aren't very good. 😅
I'm sure there would be a benefit to some of the non-nullability constraints, as long as they don't force a node error to invalidate the entire connection.