
add multi endpoint support [wip]


I wanted to see how far I could get with this, and I managed to steamroll through the whole thing without getting stuck. It passes the existing tests, but still needs new tests to properly exercise the new APIs. Also, the new ep_ and p_ APIs could use a review.

  • Multiple endpoints may now run independently, each with its own schema process.
  • The global context has been replaced with an endpoint_context type containing the schema PID and ETS tables.
  • New API in graphql with an ep_ prefix (e.g. ep_execute()). These functions take an endpoint_context.
  • New API in graphql with a p_ prefix (e.g. p_execute()). These take a schema PID or atom, which is most useful when you have a named schema process that can be referred to by its registered name (see the sketch after this list).
  • The existing API wraps the ep_ API and uses the named schema process graphql_default_endpoint.
  • The endpoint_context is passed inside Ctx maps. A variable named EP is introduced where an endpoint_context is needed but no Ctx is available.
  • ct tests pass.
  • Addresses issue #91.
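
For concreteness, calls through the new entry points might look roughly like the sketch below. The arities and the my_schema name are guesses based on the bullet list above, not taken from the patch itself:

    %% Sketch only: the ep_/p_ arities are assumed from the description
    %% above, not read from the patch. EP is the endpoint_context for a
    %% separately started endpoint; my_schema is a hypothetical
    %% registered name for another endpoint's schema process.
    run_examples(EP, Ctx, AST) ->
        %% Endpoint-context variant.
        _Res1 = graphql:ep_execute(EP, Ctx, AST),
        %% PID/atom variant, addressing the schema by registered name.
        _Res2 = graphql:p_execute(my_schema, Ctx, AST),
        %% Legacy API: unchanged, routes to graphql_default_endpoint.
        graphql:execute(Ctx, AST).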

goertzenator, Apr 13 '18 19:04

I'm liking this API less and less the more I think about it. I am pondering the points raised in issue #91, but I'm left with a big question: should there be a single high-level function that does all of the work currently done in the cowboy handler? That work is essentially repeated verbatim in the test helper th:x(), and the cowboy handler I use for my own work is just a copy of the example cowboy handler. If this common use case can be compressed into a single function, then the design of the underlying API functions becomes less important and less fragile to future change.
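
To make that idea concrete, such a one-call entry point might look roughly like the sketch below. The module and function name (graphql_http:run/4) are invented for illustration, and the phase calls mirror what the example cowboy handler and th:x() chain together by hand today, so the exact arities may differ from the current library:

    %% Hypothetical "do everything" wrapper; names and arities are
    %% illustrative only.
    -module(graphql_http).
    -export([run/4]).

    run(Doc, OpName, Vars, Ctx) ->
        try
            {ok, AST0} = graphql:parse(Doc),
            Elab = graphql:elaborate(AST0),
            {ok, #{ fun_env := FunEnv, ast := AST }} = graphql:type_check(Elab),
            ok = graphql:validate(AST),
            Coerced = graphql:type_check_params(FunEnv, OpName, Vars),
            Ctx1 = Ctx#{ params => Coerced, operation_name => OpName },
            {ok, graphql:execute(Ctx1, AST)}
        catch
            throw:Err -> {error, Err}
        end.

A cowboy handler would then only have to decode the request body, call graphql_http:run/4, and encode the response.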

goertzenator, Apr 13 '18 21:04

I think we should aim to simplify the graphql:Fun/N interface. I originally split it because I had the idea you could stage the computation (sketched after the list below), but

  • Most implementations don't; they run dynamic queries, since that is fast enough in practice.
  • If we do need phasing, well, then a high-level interface seems like the right thing to do anyway.
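
For reference, "staging the computation" here means doing the parse/type-check/validate work once and reusing the result across requests that differ only in their variables, roughly as in the sketch below (phase calls as in the example handler; exact arities may differ):

    %% Hypothetical prepared-query helpers, illustrating what phasing
    %% would buy: the expensive phases run once per document, while only
    %% variable coercion and execution run per request.
    prepare(Doc) ->
        {ok, AST0} = graphql:parse(Doc),
        Elab = graphql:elaborate(AST0),
        {ok, #{ fun_env := FunEnv, ast := AST }} = graphql:type_check(Elab),
        ok = graphql:validate(AST),
        {FunEnv, AST}.

    run_prepared({FunEnv, AST}, OpName, Vars, Ctx) ->
        Coerced = graphql:type_check_params(FunEnv, OpName, Vars),
        graphql:execute(Ctx#{ params => Coerced, operation_name => OpName }, AST).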

I suggest we work on nailing the high-level interface, keeping the contextual execution in mind. The patch seems correct to me, but indeed, we should probably nail the interface first.

I'm currently focused on making us compliant with the June 2018 specification, and things seem to have changed in lots of small places. There are also some refactoring rounds that are sorely needed on the internals, so expect me to change things around a lot.

We can try to get your changes in between all the other work so it is managed nicely. Unless you want to rewrite the core, I think we should just merge this and then fix things as we go along (and add tests later).

jlouis, Jun 25 '18 12:06