graphql-go
RFC: New way to declare resolvers
I've just been to the GraphQL Europe conference and it got me thinking about how we specify resolvers. Using Go methods seemed nice at first, but is that really the best way? Please take a look at this alternative: https://github.com/neelance/graphql-go/blob/new-resolvers/example/starwars/starwars.go#L291
Advantages:
- no resolver wrappers, especially good for list results
- API easier to learn
- clean shortcuts like `ResolverField`
- most type checking at startup is still possible
- explicit name mapping between GraphQL and Go
- automatic handling of GraphQL type assertions
Disadvantages:
- no type checking on return value if the type is a GraphQL interface (they map to Go's `interface{}`)
What do you think?
/cc @F21 @nicksrandall @acornejo @bsr203 @EmperorEarth
looks much simpler and easier for new users and looks great. I use relay "Node" interface a lot and not sure how much headache would it be not having validation. But, I am ok with the tradeoff considering the overall simplification. Thanks for your continuing effort on improving the library. Cheers.
This looks like a good way to lower the barrier to entry.
Would the existing way to define resolvers still be supported?
It could be that I'm used to the current method of writing resolvers, but I have a few concerns about doing away with the existing method and going only with this approach:
- would force the creation of API-specific models if the underlying data does not match the GraphQL schema.
- would make it harder to unit test, unless the functions are named.
- function names would have to be more verbose (maybe include the name of the object being resolved?) because we lose the ability to attach functions to types
- it becomes a bit unclear how to organize graphql-related code inside your project (what if I want the registration to be separate from the function definitions?)
- we lose some type checking
- ResolverField is convenient for simple use cases but in advanced use cases (i.e. lazy-loading of fields) maybe not so much
However, the only drawback I can think of if this method of specifying resolvers lives alongside the existing is that multiple ways to do the same things can be confusing.
> Would the existing way to define resolvers still be supported?
No, but you can emulate it via https://golang.org/ref/spec#Method_expressions.
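For readers unfamiliar with them: a method expression turns a method into an ordinary function whose first argument is the receiver, so it can be passed into a resolver registry like any other function value. A minimal self-contained sketch (the `Resolver` type here is just a stand-in, not part of graphql-go):

```go
package main

import "fmt"

// Resolver is a stand-in for an application resolver type.
type Resolver struct{ name string }

// Name is an ordinary method.
func (r *Resolver) Name() string { return r.name }

func main() {
	// The method expression (*Resolver).Name is a plain function of
	// type func(*Resolver) string, so it can be passed around like
	// any other value, e.g. into a map of field resolvers.
	f := (*Resolver).Name
	r := &Resolver{name: "R2-D2"}
	fmt.Println(f(r)) // prints "R2-D2"
}
```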
> - would force the creation of API-specific models if the underlying data does not match the GraphQL schema.
What do you mean by that?
> - would make it harder to unit test, unless the functions are named.
> - function names would have to be more verbose (maybe include the name of the object being resolved?) because we lose the ability to attach functions to types
I think you can still have them attached to a type by using https://golang.org/ref/spec#Method_expressions.
> - it becomes a bit unclear how to organize graphql-related code inside your project (what if I want the registration to be separate from the function definitions?)
Please also explain some more.
I like it!
However, I feel that the downside is that if there is a missing resolver, it might not be caught.
For example, if I just have a schema, and have no resolvers, will the missing resolver only be caught during runtime?
s := graphql.ParseSchema(schemaIDL)
// no resolvers here.
Missing resolvers will throw an error at application startup, just like they do right now.
> No, but you can emulate it via https://golang.org/ref/spec#Method_expressions.
Okay, that's perfect! I didn't realize methods could be used like this. This would remain largely the same as existing in this case.
> would force the creation of API-specific models if the underlying data does not match the GraphQL schema
From the example given, it looks like the functions return the actual type rather than a resolver.
Here's an example:
// This type seems to be specific to the API, used instead of resolver wrapper
type starship struct {
ID graphql.ID
Name string
Length float64
}
// Imagine this is the type contained by starshipData
type legacyStarship struct {
ID int
BrandName string
Model string
Length float32
}
s.Resolver("Query", "starship", func(r *Resolver, args *struct{ ID graphql.ID }) *starship {
legacy := starshipData[args.ID]
return &starship{
ID: graphql.ID(strconv.Itoa(legacy.ID)), // previously each of these would have a function that contains this logic
Name: strings.Join([]string{legacy.BrandName, legacy.Model}, " "), // strings.Join takes the slice first, then the separator
Length: float64(legacy.Length),
}
})
Is it the case that you can no longer return a resolver type? If so, do we lose the ability to lazy-load fields?
> it becomes a bit unclear how to organize graphql-related code inside your project (what if I want the registration to be separate from the function definitions?)
This is a moot point since I can just use method expressions, like you mentioned.
> Is it the case that you can no longer return a resolver type?
Sure you can. Why shouldn't it be possible?
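For illustration, returning a resolver type keeps lazy loading intact: the expensive work runs only when a field method is actually invoked, not when the resolver is constructed. A plain-Go sketch (no graphql-go types involved; all names are made up):

```go
package main

import "fmt"

// starshipResolver wraps an ID; the expensive lookup is deferred
// until a field method is actually called.
type starshipResolver struct {
	id     int
	loaded *string // nil until first access
}

func (r *starshipResolver) Name() string {
	if r.loaded == nil {
		// Imagine a database call here; it only happens when the
		// "name" field is actually part of the query.
		n := fmt.Sprintf("ship-%d", r.id)
		r.loaded = &n
	}
	return *r.loaded
}

func main() {
	r := &starshipResolver{id: 42} // nothing loaded yet
	fmt.Println(r.Name())          // lookup happens here
}
```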
Ah I was confused... This seems great! I don't have any more concerns. Out of curiosity are there any performance implications to using this method?
I have not done any performance tests yet. However, `ResolverField` might be faster because it avoids a function call.
It would be nice if you could define multiple field resolvers together to make it easier to keep track of the fields defined for your resolver
// graphql.FieldMap = map[string]string
s.ResolveFields("Droid", graphql.FieldMap{
"id": "ID",
"name": "Name",
"appearsIn": "AppearsIn",
"primaryFunction": "PrimaryFunction",
})
or use struct tags
type droid struct {
ID graphql.ID `gql:"id"`
Name string `gql:"name"`
Friends []graphql.ID
AppearsIn []string `gql:"appearsIn"`
PrimaryFunction string `gql:"primaryFunction"`
}
If the struct tag route is up for discussion, it would be nice to just reuse json tags too if possible.
Thanks for all the great work!
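If the struct-tag route were adopted, the helper deriving the field map could be sketched with plain `reflect`; the `gql` tag name and the map shape here are assumptions, not graphql-go API:

```go
package main

import (
	"fmt"
	"reflect"
)

type droid struct {
	ID              string `gql:"id"`
	Name            string `gql:"name"`
	PrimaryFunction string `gql:"primaryFunction"`
}

// fieldMapFromTags builds a GraphQL-name -> Go-field-name map from
// `gql` struct tags; untagged fields are skipped.
func fieldMapFromTags(v interface{}) map[string]string {
	t := reflect.TypeOf(v)
	if t.Kind() == reflect.Ptr {
		t = t.Elem()
	}
	m := make(map[string]string)
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		if tag, ok := f.Tag.Lookup("gql"); ok {
			m[tag] = f.Name
		}
	}
	return m
}

func main() {
	fmt.Println(fieldMapFromTags(&droid{}))
	// map[id:ID name:Name primaryFunction:PrimaryFunction]
}
```

Reusing `json` tags would only require swapping the tag name passed to `Lookup`.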
How do you plan to thread context?
@euforic I agree that such helpers may be nice. They should be easy to add on top of the simpler base API.
@nathanborror Sorry, but I don't understand your question.
I much, much, prefer the struct-based approach.
From your list of advantages:
> - API easier to learn
I don't think it's easier. The API currently is essentially 2 functions (from the examples):
graphql.ParseSchema(starwars.Schema, &starwars.Resolver{})
http.Handle("/query", &relay.Handler{Schema: schema})
And `graphql.ID` for resolvers. That's about it.
In fact the API is very clean, and doesn't "overreach" in the app code. One way to see this is that the only dependency on graphql for the big chunk of the example code is on `graphql.ID`. And the server implementation only calls `graphql.ParseSchema` and `relay.Handler`. That's it.
> - clean shortcuts like `ResolverField`
Having no shortcuts is cleaner. But I agree that for simple fields, some json-like annotations, as suggested in this issue and another one, could make things a tiny bit easier. That's also cleaner than `ResolverField`, I think.
You suggested that method expressions can be used. Yes, but instead of just adding a method to a struct (which I need anyway), now I also have to add one line of code, calling ResolverField with string arguments.
To quote @neelance on a related discussion ;)
I think that the following happened a lot during the design of the Go language:
Is it possible? Yes.
Would it give the option to write less code? Yes.
Should we add it? Probably not.
(Which made me laugh, because I think it's probably true :) )
And I think this `ResolverField` is one of those too. You save 2 lines of code with the shortcut, but it's not cleaner than the function, which gives you more flexibility.
> - most type checking at startup is still possible
"most" is not as good, so this is a step backwards (and a big one in my opinion)
> - explicit name mapping between GraphQL and Go
> - automatic handling of GraphQL type assertions
I either don't really understand what you mean, or don't see the difference/improvement over how things are now. Passing strings as function arguments doesn't map any better than function names do (ok, they match the case). In fact, to me those strings are at least as magical as method names. If we were using something like `ResolveField(schema.Human, schema.Friends, &Human.Friends)`, maybe, but again, this is just repeating what the `Human.Friends` struct/function already says.
The other thing is that code organization is a bit more messy. The initSchema function is huge, and must have a direct dependency on every piece of the implementation: all the resolvers, all the fields. Of course I could pass around `s`, but it's still not as clean.
Ultimately, there is a much tighter link between the app code and the graphql-go library. Testing is not as easy (before, my tests would pretty much not know about graphql-go, just import my structs and my resolvers, and use that directly). Testing was mentioned before by @tonyghita, but another thing to add is that now testing has to include this initSchema function, and the implementation of my resolvers. Before it was only the resolvers.
But losing some type safety is sort of the biggest issue I have. Actually, I think it's already a pretty big problem that some of that checking is done at startup time, and not build time. It is a tradeoff to make the API simpler (pass the schema as a string, and not an object that you have to build). Ideally it should be possible to build the schema only from the top-level resolver struct (the API would look like `graphql.BuildSchema(&starwars.Resolver{})`, as long as it doesn't require some sort of `schemaBuilder.Object` like here).
Obviously all of these arguments can become moot with a single helper function that does reflection, and makes all the s.Resolver() and s.ResolverField() calls for me (which is roughly what is happening currently)
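For concreteness, the enumeration half of such a reflection helper could look like this in plain Go; the type, methods, and the idea of feeding the result into registration calls are all hypothetical here:

```go
package main

import (
	"fmt"
	"reflect"
)

// queryResolver is a stand-in for a top-level resolver struct.
type queryResolver struct{}

func (queryResolver) Hero() string  { return "R2-D2" }
func (queryResolver) Droid() string { return "C-3PO" }

// methodNames lists the exported methods of v's type, i.e. the
// candidates a reflection-based helper would register as field
// resolvers. reflect returns them in lexicographic order.
func methodNames(v interface{}) []string {
	t := reflect.TypeOf(v)
	names := make([]string, 0, t.NumMethod())
	for i := 0; i < t.NumMethod(); i++ {
		names = append(names, t.Method(i).Name)
	}
	return names
}

func main() {
	fmt.Println(methodNames(queryResolver{})) // [Droid Hero]
}
```

The hard part such a helper would still face is mapping each method's GraphQL name and return type back onto the schema, which is exactly the implicit traversal discussed in this thread.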
@yohcop Thanks a lot for your feedback. Some thoughts:
About the type safety: It is really only about GraphQL interfaces. I also wasn't fully happy about how they worked before, it was more complicated than it needs to be.
About `ResolverField`: This is not only a shortcut, but performance-relevant. Calling a function is much more expensive than reading a field. `ResolverField` will not just be a wrapper around `Resolver`.
About your "single helper function that does reflection": That won't work easily. graphql-go currently traverses the schema and the Go types in parallel to figure out mappings between GraphQL types and Go types. This is quite implicit. With the new API this will be much clearer to understand.
I have started working on the new approach and the graphql-go implementation already became a bit less complicated, which I think is good. It will also make it easier to add some features requested in other issues. Please take a look at how the new API currently looks like, I have iterated on it a bit: https://github.com/neelance/graphql-go/blob/new-resolvers/example/starwars/starwars.go#L289
I hear your concerns, but right now this still looks like the right way to go.
Fair enough :) Actually I would take performance improvements over almost anything. I would be wary of promising improvements before benchmarks though (unless you have some already?).
Ultimately it's also your library, and you know it better than anyone.
Looking at the last iteration, I think I can live with it. It is a little complex in a way; here are a few questions:
- what is the root struct? What's the purpose of it? I see it mirrors the more concrete types `human`, `droid`, etc. Is it just to fit the API?
- I see the `Schema()` function returns `b.Build(&root{})`. If `root` is not really necessary, could this be replaced with `b.Build()` (or passed a dummy root object), and instead `&root{}` would be built at query time, in the http handler? It could then be used to pass information about the query through resolvers for Queries and Mutations (e.g. logged-in user info). Currently one has to use `Context.WithValue`/`Context.Value` in a non-typesafe way, but a user-defined struct would be much nicer. Another mechanism would be welcome too, if root doesn't work here.
I'll keep following, keep us updated :+1:
Moving from https://github.com/neelance/graphql-go/issues/15#issuecomment-311453283
When do you think this will be done?
Also, can you use tags/releases so we can use a dependency management tool to manage the versioning as you develop more and change the API?
@neelance what's the decision on this? Is the new way final and just blocked on you finishing the work?
I like the approach; it definitely would make sense to go in that direction. It would take a good chunk of my boilerplate code away.
Also thanks @neelance for https://golang.org/ref/spec#Method_expressions
I found time to work on graphql-go today and made good progress on the new resolvers. My plan is to push a version tomorrow that is not fully finished, but ready for testing and feedback.
If you have a new release I'll test it out this week. Is there going to be a release tag or other versioning system?
All right, the new version on branch `new-resolvers` passes all tests. Feel free to give it a try. It still needs better error handling in a lot of places and code cleanup. I'm also not really happy about the top-level value (the argument passed to `Build`). One option would be to remove it entirely. Ideas on this and other feedback are welcome.
@arnos I'm still reluctant to put a version label on it, since the API does not yet feel stabilized to me. I'm still exploring the GraphQL world.
That tells me that this isn't really production ready :/
@ijsnow Depends on your definition of "production ready": If you mean "use it and expect bug fixes without occasionally having to invest work to adopt some API changes" then no, it is not this "production ready". If you mean "it does not have bugs all over the place", then yes, it is this "production ready".
I just tested the new version.
It seems a bit weird in some places compared to the old style: `b.Resolvers("Query", (*root)(nil), map[string]interface{}{`
But not having a `func` for every field which is just a getter is a lot better! :+1:
@metalmatze Thanks for giving it a try. Any ideas on how I could improve the new API? It also feels a bit weird to me, but maybe we're just not used to it.
@neelance I tried it out; some observations:
- It works well, and with the method expressions you linked to earlier, minimal changes are required:
SB.Resolvers("Queries", (*Resolver)(nil), map[string]interface{}{
    "flavor":  (*Resolver).Flavor,
    "flavors": (*Resolver).Flavors,
})
- I like being able to use fields directly, (I have to keep custom wrapper function to convert time.Time to graphql.Time)
SB.Resolvers("Flavor", (*FlavorResolver)(nil), map[string]interface{}{
    "id":        "Id",
    "name":      "Name",
    "createdAt": (*FlavorResolver).CreatedAt,
    "updatedAt": (*FlavorResolver).UpdatedAt,
})
- Using the clean architecture, my models reside in a Domain package separate from the GraphQL package (I end up duplicating more code, but that's a function of my design, not the library). As a result, the only code that goes away is the simple field resolvers.
In terms of changes
- a dead simple change is to extract the map and the resolver type instead of declaring them inline in the Resolvers call; this would make the API cleaner (IMHO). Compare my query resolver example above with the following:
queryMap := map[string]interface{}{
"flavor": (*root).Flavor,
"flavors": (*root).Flavors,
}
rootResolver := (*root)(nil)
SB.Resolvers("Queries", rootResolver, queryMap)
SB.Resolvers("Flavor", flavorResolver, flavorMap)
Overall I like the change, even though it brings some duplication for me with the clean architecture.
If you did make a version tag for the old API before you merge to master, it'd be really helpful for those of us using Glide and the like. We'd have better control of when we port our code to this new way of implementing resolvers.
Also, what about trying to get complete feature support before changing the API? Like subscriptions.
@neelance Do you think that the package is likely to go with this new way?