# graphcool-gateway-apollo-engine-demo
This demo demonstrates using Apollo Engine with the Graphcool API Gateway pattern. It contains a simple example with a single endpoint, an advanced example that stitches together two endpoints, and a caching example that takes advantage of the new Apollo Cache Control.
## Simple example
### Setup
- Register on https://www.apollographql.com/engine/
- Create a new service and note the API key
- Create a `.env` file in the root of your project folder with the following keys: `GRAPHCOOL_ENDPOINT` and `APOLLO_ENGINE_KEY`:

  ```
  GRAPHCOOL_ENDPOINT=https://api.graph.cool/simple/v1/...
  APOLLO_ENGINE_KEY=service:xxx:.......
  ```

- Run `yarn install` or `npm install`
- Run `yarn start` or `npm start`
- Open http://localhost:3000/playground and execute some queries
- Go to the Apollo Engine website to check your metrics
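Any query against your Graphcool schema will produce metrics. As a hedged illustration (the type and field names below are hypothetical and depend on your own Graphcool schema), a Playground query might look like:

```graphql
# Hypothetical query — adjust type and field names to your own schema.
{
  allPosts {
    id
    title
  }
}
```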


### Notes

- Unfortunately, `makeRemoteExecutableSchema` turns every query into a single request to the underlying API (our Graphcool API). This means the metrics will not show any useful data about how your query is actually executed by the Graphcool server. It does, however, give you an overall indication of relative performance.
## Advanced example
The advanced example combines two different endpoints, one with Posts, and one with Comments. Now, the tracing from Apollo Engine becomes a lot more interesting. I selected two different regions to illustrate the difference between the two endpoints.

- Create a `.env` file in the root of your project folder with the following keys: `GRAPHCOOL_POST_ENDPOINT`, `GRAPHCOOL_COMMENT_ENDPOINT`, and `APOLLO_ENGINE_KEY`. If you leave out the endpoint keys, the demo uses two read-only demo endpoints. If you want to use your own endpoints, use the schemas from the `schemas` folder to set them up.
- Start with `yarn start:merged` or `npm run start:merged`
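Once the merged gateway is running, a single query can span both endpoints, and Apollo Engine's trace view shows the two backends separately. The field names below are hypothetical and depend on the demo schemas:

```graphql
# Hypothetical merged query — one request fans out to both Graphcool endpoints.
{
  allPosts {
    title
    comments {
      text
    }
  }
}
```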

## Caching example
The caching example takes advantage of the new Apollo Cache Control standard, implemented by Apollo Server, and recognized by Apollo Engine. Based on caching hints delivered by Apollo Server, Apollo Engine applies intelligent caching to the queries.
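Cache hints are typically declared in the schema with the `@cacheControl` directive, which Apollo Server picks up and forwards as hints. A sketch of what this looks like (the types and `maxAge` values below are hypothetical, not the demo's actual schema):

```graphql
# Hypothetical schema fragment — posts cacheable for 60s, comments for 30s.
type Post @cacheControl(maxAge: 60) {
  id: ID!
  title: String!
  comments: [Comment!]! @cacheControl(maxAge: 30)
}
```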
- Use the same setup as for the advanced example
- Execute a query in the Playground. The first time, you will notice an extra result node, called `extensions`. This node contains caching hints.

- The second time the query runs, caching is applied by Apollo Engine, and the results are returned immediately. This is reflected in the Apollo Engine report: the first request took 792 ms, the second only 1 ms, thanks to Apollo Engine's in-memory local cache.
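The cache lifetime Apollo Engine applies follows from the hints: a response is cacheable for the minimum `maxAge` across all field hints, and a field without a hint (effectively `maxAge: 0`) makes the whole response uncacheable. A minimal sketch of that rule, using a hypothetical helper (not part of any Apollo library):

```javascript
// overallMaxAge is a hypothetical helper illustrating how an overall cache
// policy is derived from per-field hints: take the minimum maxAge, treating
// a missing maxAge as 0 (uncacheable).
function overallMaxAge(hints) {
  let min = Infinity;
  for (const hint of hints) {
    const age = hint.maxAge ?? 0; // no hint means the field is not cacheable
    if (age < min) min = age;
  }
  return min === Infinity ? 0 : min;
}

// Hints shaped like those in the `extensions` node of a response:
const hints = [
  { path: ['allPosts'], maxAge: 60 },
  { path: ['allPosts', 'comments'], maxAge: 30 },
];
console.log(overallMaxAge(hints)); // 30
```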
