
Resolvers keep running even after one of them throws/rejects - consider Promise.allSettled?

Open olsavmic opened this issue 4 years ago • 9 comments

Hi, I recently came across this issue while trying to create a DB transaction per request.

The reason for using a DB transaction per request (which means one connection per client) is to keep the returned data consistent (READ COMMITTED isolation), as some inconsistencies started to occur on certain queries once traffic increased.

The second reason is having more control over the DB connections to support load balancing across a dynamic number of read replicas (as the only available solution unfortunately does not support connection pooling). However, I suppose this can be solved with some effort.

The problem is that when one of the resolvers fails with an error (not the common case, but it happens sometimes), the other resolvers keep running. However, I need to release the DB connection before I send the response, and the response is sometimes sent before the rest of the resolvers finish (most of them are DataLoaders, which makes the problem more obvious, since their default behaviour is to wait until the next tick before the batch operation runs).
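To make the race concrete, here is a minimal sketch of the per-request pattern (the `db` client and its `begin`/`rollback`/`release` helpers are hypothetical placeholders, not part of graphql-js; only the `graphql()` call is the real API):

```ts
import { graphql, GraphQLSchema } from 'graphql';

// Hypothetical DB client: one connection checked out per request.
declare const db: {
  connect(): Promise<{
    begin(isolation: string): Promise<void>;
    rollback(): Promise<void>;
    release(): void;
  }>;
};

async function handleRequest(schema: GraphQLSchema, source: string) {
  const connection = await db.connect();
  await connection.begin('READ COMMITTED'); // one transaction per request

  try {
    // Resolvers read through contextValue.connection. If one resolver rejects,
    // the returned promise can settle while sibling resolvers (and pending
    // DataLoader batches) are still running.
    return await graphql({ schema, source, contextValue: { connection } });
  } finally {
    // This cleanup can run before those still-pending resolvers finish,
    // so they end up touching a connection that has already been released.
    await connection.rollback();
    connection.release();
  }
}
```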

The problem could be resolved by using Promise.allSettled instead of Promise.all. I can't come up with any problem this change could cause, except a longer time until the response in case of an error.
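For reference, a standalone sketch (plain Node/TypeScript, no graphql-js involved) of the difference: Promise.all rejects as soon as one input rejects while the other promises keep running in the background, whereas Promise.allSettled only resolves once every input has settled:

```ts
const slowResolver = async (): Promise<string> => {
  await new Promise<void>((resolve) => setTimeout(resolve, 100));
  console.log('slow resolver finished (after the error was already handled)');
  return 'slow';
};

const failingResolver = async (): Promise<string> => {
  throw new Error('boom');
};

async function withAll(): Promise<void> {
  try {
    await Promise.all([slowResolver(), failingResolver()]);
  } catch {
    // Reached immediately: slowResolver() is still pending, so any resource
    // cleanup done here races with it.
    console.log('Promise.all: cleaning up while a resolver is still running');
  }
}

async function withAllSettled(): Promise<void> {
  const results = await Promise.allSettled([slowResolver(), failingResolver()]);
  // Every promise has settled by now, so cleanup here is safe.
  console.log('Promise.allSettled: all settled:', results.map((r) => r.status));
}

async function main(): Promise<void> {
  await withAll();
  await new Promise<void>((resolve) => setTimeout(resolve, 200)); // let the stray resolver drain
  await withAllSettled();
}

main();
```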

Can you please correct me if I'm mistaken, or consider this change? Or maybe transactional processing of resolvers is a bad idea in general (although I don't see any reason against it except the performance gain from using multiple connections per request).

Thanks a lot!

olsavmic avatar Mar 18 '21 20:03 olsavmic

@olsavmic I think it's a more general issue than DB transactions, since it will affect any resource with reference counting. It can also become a potential DDoS problem: if you can find a request that fails fast but provokes a lot of memory allocation referenced by long-running resolvers, you can run the server out of memory.

But we need to be extra careful with the `execute` code to not affect performance or correctness. So let's postpone this fix until we finish the TS conversion, and release it in the upcoming 16.0.0.

IvanGoncharov avatar Apr 04 '21 19:04 IvanGoncharov

Good point, thank you!

olsavmic avatar Apr 06 '21 09:04 olsavmic

Hi @IvanGoncharov, I see this issue in the 16.0.0-alpha.1 milestone, which has already been released. Is it still planned for v16, or did you run into some issues with this change?

Thanks a lot! :)

olsavmic avatar Aug 08 '21 14:08 olsavmic

This feature is useful when the tasks are not dependent on each other.

hamzahamidi avatar Sep 21 '21 16:09 hamzahamidi

@hamzahamidi Can you elaborate on this? I can't think of such a situation, as I'm not proposing to run these resolvers in sequence, but rather just to wait for all resolvers to finish before sending a response, which allows for proper resource cleanup and prevents the possible DDoS that @IvanGoncharov described.

olsavmic avatar Sep 22 '21 05:09 olsavmic

@IvanGoncharov can we consider this for v16?

yaacovCR avatar Oct 10 '21 19:10 yaacovCR

Released in graphql-executor 0.0.7, see above links.

yaacovCR avatar Oct 22 '21 12:10 yaacovCR

We ran into this issue and it was causing problems with our dependency injection, as we didn't expect graphql to continue processing after returning an error result.

charsleysa avatar Feb 24 '22 23:02 charsleysa