Lookahead#selects? performance
Is your feature request related to a problem? Please describe.
I've got a field definition with `extras: [:lookahead]` that does something like this:

```ruby
do_extra_thing = lookahead&.selections&.any? do |sel|
  [:field_1, :field_2, :field_3].any? { |field| sel.selects?(field) }
end || false
```
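For context, here's a minimal sketch of the kind of field definition this lives in (the field name, type, and `Thing.all` call are made up for illustration; the real definition differs):

```ruby
class Types::QueryType < Types::BaseObject
  # `extras: [:lookahead]` asks graphql-ruby to pass a
  # GraphQL::Execution::Lookahead into the resolver as a keyword argument.
  field :things, [Types::ThingType], null: false, extras: [:lookahead]

  def things(lookahead:)
    # Only do the expensive extra work when one of these fields was requested.
    do_extra_thing = lookahead&.selections&.any? do |sel|
      [:field_1, :field_2, :field_3].any? { |field| sel.selects?(field) }
    end || false

    Thing.all(include_extra: do_extra_thing)
  end
end
```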
It turns out that for a fairly large production GraphQL request, a significant chunk of object allocations (5% of the total) happens in `normalize_name`:
https://github.com/rmosolgo/graphql-ruby/blob/9cc79a95b66b9719815fd41801e227b20944e02e/lib/graphql/execution/lookahead.rb#L200-L206
Every time `selects?` is called with a symbol, even one that has been seen before, it has to be camelized.
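For illustration, here's a rough, self-contained approximation of the cost (this is not the library's actual code, just a sketch of why camelizing a symbol on every call allocates):

```ruby
# Illustrative only: a naive snake_case -> camelCase conversion. Each call
# builds a fresh Array and several fresh Strings, even for a symbol that
# has already been converted before.
def camelize(sym)
  parts = sym.to_s.split("_")
  parts.first + parts.drop(1).map(&:capitalize).join
end

before = GC.stat(:total_allocated_objects)
10_000.times { camelize(:field_1) }
after = GC.stat(:total_allocated_objects)
puts "~#{after - before} objects allocated for 10,000 calls"
```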
Describe the solution you'd like
It'd be nice if we could somehow avoid these object allocations. Perhaps the camelization could be memoized (though I'm not sure how that would work, since the `Lookahead` object is new each time, so memoizing there doesn't seem right), or we could have a one-time mapping at the query level from the GraphQL version of the field name to the Ruby version, or do this during schema generation, or something along those lines.
I don't have a particular solution that I know would work, but I'm hoping there might be something simple to prevent fields from being re-camelized.
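One possible shape of that one-time-mapping idea, purely as a sketch (the cache and its location are invented here; `GraphQL::Schema::Member::BuildType.camelize` is the camelization helper, if I'm reading the linked code right):

```ruby
# Sketch only: build the camelized String once per symbol and reuse it,
# instead of re-camelizing on every selects? call. Where such a cache
# would actually live in graphql-ruby is exactly the open question.
CAMELIZED_NAMES = Hash.new do |cache, sym|
  cache[sym] = GraphQL::Schema::Member::BuildType.camelize(sym.to_s)
end

CAMELIZED_NAMES[:field_1] # builds the camelized String once
CAMELIZED_NAMES[:field_1] # later lookups reuse the cached String
```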
Thanks for the detailed report, and sorry for the trouble! I have to admit I haven't considered the performance of Lookaheads closely... and perhaps it shows. I'm sure we can improve it. I'll take a look soon and follow up here.
(Of course, in the meantime, you could switch those symbols to strings to work around this issue!)
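For example, something along these lines (the string names below are assumed camelCase equivalents; substitute your schema's actual field names):

```ruby
# Passing the GraphQL-style names as strings sidesteps the per-call
# symbol camelization in selects?. Names here are placeholders.
FIELDS_OF_INTEREST = ["field1", "field2", "field3"].freeze

do_extra_thing = lookahead&.selections&.any? do |sel|
  FIELDS_OF_INTEREST.any? { |field| sel.selects?(field) }
end || false
```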
#4189 might help, depending on how the fields are defined (it should help if the fields are defined as `field :field_1 ...`, etc., in the schema definition). I'll release it in 2.0.14 shortly.
Were you able to give this a try on 2.0.14? Please let me know what you find if so.
@rmosolgo I have a note to check on this after we're able to upgrade to 2.0.14. Do you want to close this for now? The fix looks promising.
Sure, thanks again for the detailed report. Please let me know if you run into any more trouble with it!