caluma
Use graphene dataloader
I have heard from several sources that the way graphene is currently implemented can lead to N+1 database queries.
I had a look, and it appears that graphene supports a DataLoader, which allows batching of database calls; see https://docs.graphene-python.org/en/latest/execution/dataloader/.
Perhaps this is something that can be used in caluma as well.
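For context, here is a minimal, self-contained sketch of the batching idea behind a DataLoader. This is not graphene's actual API (the real one, per the docs linked above, is promise/async based) and the names (`FormLoader`, `batch_get`) are invented for illustration; it only shows how queuing keys and resolving them in one batch avoids N+1 queries.

```python
# Illustrative sketch only -- not graphene's DataLoader API and not Caluma code.
# It shows the core batching idea: collect requested keys, then resolve them
# all with a single batched lookup instead of one query per key.

class FormLoader:
    """Collects requested keys and resolves them in a single batch."""

    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # fetches many records in one call
        self.queued = []          # keys waiting to be loaded
        self.cache = {}

    def load(self, key):
        # Defer the lookup: return a thunk that reads from the cache
        # once dispatch() has run the single batched query.
        self.queued.append(key)
        return lambda: self.cache[key]

    def dispatch(self):
        # One query for all queued keys instead of N separate queries.
        self.cache = self.batch_fn(self.queued)

# Fake "database" standing in for a Django queryset.
FAKE_DB = {1: "form-a", 2: "form-b", 3: "form-c"}
calls = []

def batch_get(keys):
    calls.append(list(keys))  # record how many batched calls happened
    return {k: FAKE_DB[k] for k in keys}

loader = FormLoader(batch_get)
deferred = [loader.load(k) for k in (1, 2, 3)]
loader.dispatch()
print([d() for d in deferred])  # ['form-a', 'form-b', 'form-c']
print(len(calls))               # 1 -> a single batched query served all nodes
```

In a Django context, `batch_get` would typically be a single `filter(pk__in=keys)` queryset, which is exactly the kind of merging that per-node filtering makes difficult.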
We've looked into this, but unfortunately, due to our use of the relay pattern, it's not possible.
Every node, even of the same type, is separately filterable, and thus must use its own queryset.
We'd need to diverge completely from the relay pattern to allow for this optimisation.
Potentially, we could also provide "fast-path" access to often-used data such as forms, which would not allow filtering on every level: when fetching a form, we usually want all the questions within it, so no filtering would be required on the "questions" node.
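The fast-path idea above could be sketched roughly as follows. All names here (`fetch_form_fast`, the fake data) are hypothetical, not actual Caluma API; the point is only that once the nested "questions" field accepts no filters, the form and all its questions can be served from one prefetching pass instead of one query per node.

```python
# Hypothetical "fast-path" sketch -- invented names, not the Caluma API.
# Because the nested "questions" field allows no per-node filtering here,
# the form and all its questions can be loaded in a single pass.

FAKE_FORMS = {"f1": {"slug": "f1", "question_ids": ["q1", "q2"]}}
FAKE_QUESTIONS = {"q1": {"slug": "q1"}, "q2": {"slug": "q2"}}
queries = []

def fetch_form_fast(slug):
    # One pass: load the form together with every question it contains.
    # In Django terms this would be a single prefetching query rather
    # than a separate filtered queryset per "questions" node.
    queries.append(f"form+questions:{slug}")
    form = dict(FAKE_FORMS[slug])
    form["questions"] = [FAKE_QUESTIONS[q] for q in form.pop("question_ids")]
    return form

form = fetch_form_fast("f1")
print(len(form["questions"]), len(queries))  # 2 questions fetched, 1 query
```

The trade-off is that such a fast path sacrifices relay's per-node filterability in exchange for predictable, batched data access.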