graphql-tools
[batch-delegate] Error paths use their batched indices
Describe the bug
When we batch-delegate a request, error paths use their index in the batched result rather than in the original result.
To Reproduce
Construct a query which batch-delegates the same error at two different paths. See #2951.
Expected behavior
The error is relocated to each corresponding path in the delegating query.
Environment
- OS: N/A
- @graphql-tools/...: master
- NodeJS: LTS
Awesome find, we tried to solve this with onLocatedError, but that logic is wrong as you have discovered. Fix in the works :)
I think what is actually happening is that all indices are of the first list element sent in
> what is actually happening is that all indices are of the first list element sent in
I think you're right. I've added more data to the test and that seems to be consistently 0 even if the response at index 0 is not an error.
What I think is happening (although I'm not familiar with DataLoader) is that in getLoader we're caching on info.fieldNodes and schema, meaning that if a second delegated request has a different info.path, it still receives the cached loader from an earlier delegation.
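To illustrate the suspicion, here is a hedged stand-in sketch (not the actual graphql-tools internals; `getLoader`, the cache key, and `FakeInfo` are simplified stand-ins): if the loader is cached only on `(schema, fieldNodes)`, the `info` closed over at creation time belongs to the first delegation, and later delegations with a different `info.path` silently reuse it.

```typescript
// Stand-in types and cache, simplified from the suspected caching scheme.
type FakeInfo = { path: string };

const loaderCache = new Map<string, (key: number) => string>();

function getLoader(schema: string, fieldNodes: string, info: FakeInfo) {
  // Cache key ignores `info`, mirroring the suspected bug.
  const cacheKey = `${schema}:${fieldNodes}`;
  let loader = loaderCache.get(cacheKey);
  if (loader === undefined) {
    // `info` from this FIRST call is captured for the cache's lifetime.
    loader = (key: number) => `${info.path}[${key}]`;
    loaderCache.set(cacheKey, loader);
  }
  return loader;
}
```

Calling `getLoader` again with a different `info.path` still produces results tagged with the first path, which mirrors the stale-index symptom in the report.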
> Fix in the works
Looking forward to it :) I was planning on giving it a go when I have time but sounds like you might beat me to it.
fwiw I've managed to find a workaround using your idea here and traversing the result in valuesFromResults.
I used an object for the key which includes the previous value as well as its corresponding info object.
```diff
-key: source.propertyId,
+key: { value: source.propertyId, info },
```
We now need to map the keys back to their values when passing args.
```diff
 function argsFromKeys(keys: readonly any[]) {
-  return { ids: keys };
+  return { ids: keys.map(key => key.value) };
 }
```
And map over the results, relocating errors using the corresponding info.
```diff
 function valuesFromResults(results: any[], keys: readonly any[]) {
-  return results;
+  return results.map((result, index) => relocateErrors(result, responsePathAsArray(keys[index].info.path)));
 }
```
However, because we're creating a new object for each key, this also causes duplication of args in the delegated request. That may not be an issue depending on the semantics of keys, but I imagine deduplication is probably desirable (and is why I used toBeCalledTimes in the original tests).
Instead we need to generate unique args and map those to results, then map the original keys to relocated results via the arg values. This does require knowing the bidirectional map between args and keys, so it may be more difficult to include in the underlying library.
```diff
 function argsFromKeys(keys: readonly any[]) {
-  return { ids: keys.map(key => key.value) };
+  return { ids: uniq(keys.map(key => key.value)) };
 }
```
```diff
 function valuesFromResults(results: any[], keys: readonly any[]) {
-  return results.map((result, index) => relocateErrors(result, responsePathAsArray(keys[index].info.path)));
+  const args = argsFromKeys(keys);
+  const cache = new Map(args.ids.map((id, index) => [id, results[index]]));
+  return keys.map(key => relocateErrors(cache.get(key.value), responsePathAsArray(key.info.path)));
 }
```
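Putting the diffs above together, a minimal self-contained sketch of the workaround might look like the following. `relocateErrors` and the real `GraphQLResolveInfo` are replaced with simplified stand-ins for illustration; only the key/args/results plumbing mirrors the actual code.

```typescript
// Each key carries the original value plus the delegating field's info.
type Key = { value: number; info: { path: (string | number)[] } };

function uniq<T>(xs: readonly T[]): T[] {
  return Array.from(new Set(xs));
}

function argsFromKeys(keys: readonly Key[]) {
  // Deduplicate so each id appears only once in the delegated request.
  return { ids: uniq(keys.map(key => key.value)) };
}

// Stand-in for relocating embedded errors to the delegating query's path.
function relocateErrors(result: unknown, path: (string | number)[]) {
  return { result, path };
}

function valuesFromResults(results: unknown[], keys: readonly Key[]) {
  // Rebuild the deduplicated args so unique ids line up with results...
  const args = argsFromKeys(keys);
  const cache = new Map(args.ids.map((id, index) => [id, results[index]] as const));
  // ...then fan results back out to the original (possibly duplicated) keys.
  return keys.map(key => relocateErrors(cache.get(key.value), key.info.path));
}
```

With two keys sharing the same value, the delegated request receives that id once, but both keys still receive the corresponding result relocated to their own path.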
This workaround is entirely in user code, so it can be used until there's a proper fix, but a proper fix would still be nice :) A full example can be seen here.
the simple fix I was thinking of turned out to be a dead end :(
I am glad you found a workaround!
Have to think more about how to fix within the library, PRs welcome
One problem is these embedded errors can be deeply nested within the result, so to rewrite the paths, we would need to deeply traverse the result.
we can't simply rewrite the path within the default resolver before returning the error, as non-null field errors have a different path, and if we rewrite before returning, we get other bugs...
I think the end solution is to remove the error embedding step from the batch function called by batchDelegateToSchema and perform that step manually after the data loader returns.
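The deep traversal described above could be sketched roughly as follows. This is illustrative only, not graphql-tools code; it assumes embedded errors carry a `path` whose first segment is their index in the batched result, which is swapped for the delegating query's real path.

```typescript
type Path = (string | number)[];

function relocateEmbeddedErrors(value: unknown, path: Path): unknown {
  if (value instanceof Error) {
    const embedded = value as Error & { path?: Path };
    // Swap the leading batched index for the delegating query's path.
    return Object.assign(new Error(embedded.message), {
      path: [...path, ...(embedded.path ?? []).slice(1)],
    });
  }
  if (Array.isArray(value)) {
    // Recurse into list results, which may embed errors at any depth.
    return value.map(item => relocateEmbeddedErrors(item, path));
  }
  if (value !== null && typeof value === 'object') {
    // Recurse into object results field by field.
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value)) {
      out[k] = relocateEmbeddedErrors(v, path);
    }
    return out;
  }
  return value;
}
```

The traversal cost is what makes doing this after the data loader returns (rather than inside the batch function) the more plausible home for the fix.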