neo4j-graphql-js
ID field may not be available when resolving fields outside of the Neo4j schema
If we have an example like this:
```graphql
type Person {
  id: String
  otherPropertiesEtc: String
  user: User
}

type User {
  usernameEtc: String
}
```
In this example our `User` does not exist in the Neo4j graph, but needs to be resolved from another data source. `User` also has a foreign-key relationship to `Person` via the `person.id` field.
We would solve this by adding a resolver for the `user` property of `Person`. However, depending on the query, we can have an issue: if the GraphQL query does not request the `id` of the person, then the `person.id` field is not available to us when resolving the `user` property. By this point the Cypher query has already run, so we can't augment anything to add it in.
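To illustrate the failure mode, here is a minimal resolver sketch. `fetchUserByPersonId` is a hypothetical stand-in for whatever external DataSource call you'd use; the point is that the parent object only carries `id` if the GraphQL query selected it:

```javascript
// Hypothetical resolver for Person.user. fetchUserByPersonId is an
// illustrative name for the external data-source lookup.
const resolvers = {
  Person: {
    user: async (parent, _args, context) => {
      // `parent` is whatever the generated Cypher query returned for this
      // Person. If the client's query didn't select Person.id, parent.id is
      // undefined and we have no foreign key to look the user up with.
      if (parent.id == null) {
        throw new Error("Person.id was not selected; cannot resolve user");
      }
      return context.dataSources.fetchUserByPersonId(parent.id);
    },
  },
};
```

When `{ person { user { usernameEtc } } }` is queried without `id`, the resolver has nothing to join on, which is exactly the problem described above.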
As far as I can see there isn't a good time to do this. Assuming a complicated graph, we can't just hijack the query for `Person`, as a person could be reached via various traversals. We also can't issue another query to fetch the `Person` by its `id`, because we don't have the `id` available to us.
It seems as though this is a side effect of graph databases accessing things via relationships (no need for IDs), which is fine until we need to join onto a wider, ID-based graph.
I've become a bit stuck on this and would love some advice - maybe I'm missing a trick!
I'm spit-balling here, so bear with me. What if you moved the logic for querying/resolving the RDBMS data sources (through an exposed REST API, a second-level GraphQL layer, etc.) into a custom-defined procedure on your Neo4j instance that could be called using the `@cypher` directive on the "external" field? You should still be able to access `this` to get the foreign keys that are needed.
```graphql
type Person {
  id: String
  otherPropertiesEtc: String
  user: User @cypher(statement: "CALL org.neo4j.example.getUser( this ) YIELD o RETURN o")
}

type User {
  usernameEtc: String
}
```
~~It might not work too well if the external object weren't a terminal node on your graph, but~~ it's at least some kind of workaround that might even work without any code changes except for the procedures you would be responsible for.
Edit: Thinking it through for more than the 15 minutes I did before posting, I think you could already support arbitrary depth/connections between the "external node" and any other node by utilizing appropriately-defined `@cypher` directive "pointers". It would not be very efficient to be waiting on multiple calls to an external database if you do this a lot (which begs the question: if your data are so connected, why aren't they all in the graph database?). I think it's pretty cool that you can effectively create synthetic nodes/relationships on your graph with this method, though.
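As a sketch of those "pointers" (everything here is hypothetical: the procedure name, and the `managedPersonIds` foreign-key field assumed to exist on the external object), the externally-resolved `User` could point back into the graph like so:

```graphql
type Person {
  id: String
  user: User @cypher(statement: "CALL org.neo4j.example.getUser( this ) YIELD o RETURN o")
}

type User {
  usernameEtc: String
  # Synthetic relationship: match real Person nodes using foreign keys
  # carried on the externally-resolved User object.
  manages: [Person]
    @cypher(statement: "MATCH (p:Person) WHERE p.id IN this.managedPersonIds RETURN p")
}
```

Each hop through such a pointer is another round trip to the external source, which is where the efficiency concern above comes from.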
Actually, the APOC library offers several solutions that don't require defining your own procedure. Check out `apoc.load.json` or `apoc.load.jdbc` for reference. You can store the connection string for either type of endpoint as an alias in `conf/neo4j.conf` if you'd like to keep sensitive information out of your GraphQL schema. If it isn't already supported, it might be worth a feature request in APOC to allow for aliasing of SQL parameters as well.
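Hypothetically, wiring `apoc.load.jdbc` straight into the schema might look like this (the `users_db` alias, the SQL, and the column names are all illustrative):

```graphql
type Person {
  id: String
  otherPropertiesEtc: String
  # 'users_db' would be a JDBC alias defined in conf/neo4j.conf; the SQL
  # and the shape of the returned row are illustrative assumptions.
  user: User
    @cypher(
      statement: "CALL apoc.load.jdbc('users_db', 'SELECT username AS usernameEtc FROM users WHERE person_id = ?', [this.id]) YIELD row RETURN row"
    )
}
```

Note this still leans on `this.id` being present on the node, so it sidesteps the original question's missing-`id` concern only because the Cypher runs against the node itself rather than the GraphQL selection.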
https://github.com/neo4j-graphql/neo4j-graphql-js/issues/608