
Add support for query batching

Open patrys opened this issue 7 years ago • 16 comments

⚠️ Work in progress!

TODO:

  • [x] Turn WSGI middleware into a generator
  • [x] Support query batching
  • [ ] Add tests

patrys avatar Nov 15 '18 12:11 patrys

Is it something users should be aware of?

  • [ ] Documentation

mirekm avatar Nov 15 '18 13:11 mirekm

Yup, we'll need to document this feature, maybe by adding a section about query batching to the existing WSGI middleware docs.

rafalp avatar Nov 15 '18 13:11 rafalp

Is there any update on this? I searched through the docs and it doesn't look like this was ever implemented. Without query batching of some kind, it seems like Ariadne suffers from the dreaded N+1 query problem, is that right? Are there any workarounds I missed?

Thanks!

jacobmischka avatar Jun 07 '19 20:06 jacobmischka

Hi @jacobmischka, thanks for your interest!

This feature is currently on hold, but we will eventually return to it and get it implemented.

Without query batching of some kind, it seems like Ariadne suffers from the dreaded N+1 query problem, is that right?

Query batching is not a solution to the N+1 problem. It's a solution to the "I want to run a few queries/operations in a single trip to the server" problem that mobile clients especially have.

Ariadne already supports the two most popular solutions to the N+1 problem:

  • you can provide a root value resolver that introspects the GraphQL query and loads data in an optimized way
  • if your Python code is asynchronous, you can implement a custom dataloader object that returns futures to your resolvers and, after a short delay (say, 10 ms), loads the data into those futures. This strategy is most popular in Node.js, but it's also viable if you are running on an asynchronous stack (see the sketch below).
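
To make the second strategy concrete, here's a minimal, illustrative sketch of such a delayed-batching loader. This is not Ariadne API; the UserLoader name, the batch_load_fn callback, and the 10 ms window are just assumptions for the example:

```python
import asyncio


class UserLoader:
    """Collects keys for a short window, then loads them together in one batch."""

    def __init__(self, batch_load_fn, delay=0.01):
        # batch_load_fn: async callable taking a list of keys and
        # returning values in the same order.
        self._batch_load_fn = batch_load_fn
        self._delay = delay
        self._pending = []  # (key, future) pairs waiting for the next batch

    def load(self, key):
        future = asyncio.get_running_loop().create_future()
        self._pending.append((key, future))
        if len(self._pending) == 1:
            # First key of a new batch: schedule a dispatch after the delay.
            asyncio.ensure_future(self._dispatch())
        return future

    async def _dispatch(self):
        await asyncio.sleep(self._delay)
        batch, self._pending = self._pending, []
        values = await self._batch_load_fn([key for key, _ in batch])
        for (_, future), value in zip(batch, values):
            future.set_result(value)
```

An async resolver then just does `return await loader.load(some_id)`; every key requested within the same window is fetched with a single call to batch_load_fn.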

rafalp avatar Jun 07 '19 22:06 rafalp

Ohhh that's right, I forgot about that graphql feature. Perfect, thanks much!

jacobmischka avatar Jun 07 '19 23:06 jacobmischka

You don't need to write custom data-loaders, there's https://github.com/syrusakbary/aiodataloader
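
A rough sketch of how it could plug into an Ariadne resolver (the user field, the fetch_users_by_ids helper, and keeping the loader in the context are assumptions made up for the example):

```python
from aiodataloader import DataLoader
from ariadne import QueryType

query = QueryType()


class UserLoader(DataLoader):
    async def batch_load_fn(self, keys):
        # Hypothetical bulk fetch; must return one value per key, in key order.
        users = await fetch_users_by_ids(keys)
        users_by_id = {user.id: user for user in users}
        return [users_by_id.get(key) for key in keys]


@query.field("user")
async def resolve_user(_, info, *, id):
    # One loader instance per request (e.g. created in context_value)
    # keeps batching request-scoped.
    return await info.context["user_loader"].load(id)
```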

patrys avatar Jun 10 '19 08:06 patrys

Any update on this? It looks really cool. Is it going to be supported for ASGI as well?

dacevedo12 avatar Sep 17 '20 12:09 dacevedo12

@dacevedo12 remember that query batching is a mechanism used to save time by sending multiple queries in a single HTTP request. It doesn't affect the query executor, so it makes no difference whether you are using ASGI or WSGI for your server.
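
For context, this batching is purely a transport concern: the client POSTs a JSON list of operations in one HTTP call and expects a list of results back. A sketch of what such a request could look like from the client side (the endpoint URL and the queried fields are placeholders):

```python
import requests

batched_payload = [
    {"query": "{ me { id name } }"},
    {
        "query": "query GetUser($id: ID!) { user(id: $id) { email } }",
        "variables": {"id": "42"},
    },
]

# One HTTP round trip carrying two operations; the server is expected to reply
# with a JSON list containing one result object per operation, in the same order.
response = requests.post("http://localhost:8000/graphql", json=batched_payload)
results = response.json()
```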

rafalp avatar Sep 17 '20 12:09 rafalp

With HTTP/2+ it's not as important as it was with HTTP/1.x so I'm no longer sure whether it's worth finishing this pull request given how complex it makes the query handling logic.

patrys avatar Sep 17 '20 12:09 patrys

https://github.com/syrusakbary/aiodataloader

Are there any usage samples for this with Ariadne?

ShadyNawara avatar Oct 25 '20 03:10 ShadyNawara

https://github.com/syrusakbary/aiodataloader

Are there any usage samples for this with Ariadne?

Found that you generously provided an example here: https://spectrum.chat/ariadne/general/make-single-call-to-service-that-provides-data-to-multiple-resolvers~d8ba151d-ceab-4771-b8c5-bbf628fb3eb9?m=MTU2Nzc4NzI3Mjg1Mg==

ShadyNawara avatar Oct 25 '20 05:10 ShadyNawara

@rafalp

Query batching is not a solution to the N+1 problem. It's a solution to the "I want to run a few queries/operations in a single trip to the server" problem that mobile clients especially have.

Isn't query batching required for Apollo Federation's batching of keys? So if you are trying to protect against N+1 at the gateway level, you need query batching + DataLoader to mitigate the N+1 problem created by a federated system?

nikordaris avatar Apr 09 '21 18:04 nikordaris

@nikordaris

@rafalp

Query batching is not a solution to the N+1 problem. It's a solution to the "I want to run a few queries/operations in a single trip to the server" problem that mobile clients especially have.

Isn't query batching required for Apollo Federation's batching of keys? So if you are trying to protect against N+1 at the gateway level, you need query batching + DataLoader to mitigate the N+1 problem created by a federated system?

I believe this is correct. @rafalp Are there plans to support this any time soon? I may be able to contribute to this

process0 avatar May 05 '22 17:05 process0

@process0 I think we could have a feature flag to enable query batching, but first I want to get #855 merged

rafalp avatar May 12 '22 10:05 rafalp

@rafalp now that #855 is merged, what is required for this?

process0 avatar Aug 29 '22 22:08 process0

@process0 the question about federation doesn't seem accurate. Federated servers implement the _entities query, which receives a list of type:id pairs for the items it should return to the federation gateway, so there's no N+1 problem occurring there.
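
To illustrate, a sketch of the subgraph side based on Ariadne's federation contrib (the Product type, its upc key, and the get_product_by_upc helper are assumptions for the example, and the exact reference resolver signature may differ between versions):

```python
from ariadne.contrib.federation import FederatedObjectType

# The gateway batches per subgraph: it sends a single _entities query with the
# whole list of representations, e.g.
#   query ($representations: [_Any!]!) {
#     _entities(representations: $representations) { ... on Product { name price } }
#   }
# so the subgraph receives all keys for that request at once.

product = FederatedObjectType("Product")


@product.reference_resolver
def resolve_product_reference(_, _info, representation):
    # get_product_by_upc is a hypothetical per-key lookup; combine it with a
    # dataloader if you also want the lookups within one _entities call batched.
    return get_product_by_upc(representation["upc"])
```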

As for the batching executor, I would like to see a PR that implements this as a batching HTTP handler replacing the default one.

rafalp avatar Aug 30 '22 17:08 rafalp

@rafalp I'm currently trying to implement this approach of using the list of type:id pairs that comes from the federation gateway. Do you have any resources/examples on how I can get these values from inside the resolvers? I'm using Flask + Ariadne and going async is not an option :(

I've tried this using a dataloader with WSGI, but it didn't work. When the resolver loads the entity from the instantiated data loader, it gets back a Future instance instead of the actual object in memory.

feslima avatar Feb 02 '23 14:02 feslima

Hello Felipe! Good news: next week I would like to release Ariadne 0.18, which will include dataloader support for synchronous servers. :)

rafalp avatar Feb 02 '23 14:02 rafalp

@rafalp Thank you very much for the response and good news! I'll be waiting for it.

feslima avatar Feb 02 '23 16:02 feslima

@rafalp excited for this new release too! Waiting for the dataloader support for synchronous servers.

anabeatrizgn avatar Feb 13 '23 13:02 anabeatrizgn

@rafalp Waiting for this new release! That is good news!

kyronsatt avatar Feb 13 '23 13:02 kyronsatt

That's great news, @rafalp. Looking forward to the release of this new version

leomontenegro2104 avatar Feb 13 '23 13:02 leomontenegro2104

I don't know why this discussion is happening in the query batching PR 😅

Anyway, I'm sorry but there will be delays with the next release. The biggest blocker is finishing the docs for 0.18, but the dataloaders documentation is already there, so I guess we'll have a 0.18.0 beta release before the proper final release?

rafalp avatar Feb 13 '23 14:02 rafalp

I've marked the dataloaders discussion as off-topic.

rafalp avatar Feb 13 '23 14:02 rafalp