
Custom Database Column => GraphQL Field mapping solution

Open craicoverflow opened this issue 5 years ago • 10 comments

Overview

Graphback gives you the capability to customise database column names during database generation with the @db.name annotation. See Changing column names in graphback.

Graphback does not offer a simple way to map between a custom database column name and its GraphQL field at the resolver level, i.e. when performing queries or mutations. Currently the only way to do this is to manually map fields in each resolver like this:

// In your data provider
data.title = data['note_title'];

return data;

This mapping would be overwritten in any of the generated resolver functions during the next code generation run. It also means that column customisation is completely unsupported in runtime applications.

We need a way to map between customised tables and the GraphQL schema.

Benefits

  • Users would be able to use an existing database with Graphback by manually adding annotations to their data model fields.
  • Greater customisation support.

Implementation

Option 1

Generate a mapping JSON object which the application consumes at startup (by attaching it to the context) and uses to map between fields.
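
For illustration, the generated mapping object might look something like this (the exact shape is an assumption, not a defined Graphback format; the field names follow the example schema later in this thread):

{
  "User": {
    "table": "user_account",
    "fields": {
      "id": "id",
      "name": "user_name",
      "age": "age"
    }
  }
}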

Option 2

Use annotations from the generated schema to build the mapping and attach it to the runtime context.

Eg:

const context = createKnexRuntimeContext(db, pubSub, generatedSchema)

export const createKnexRuntimeContext = (db: Knex, pubSub: PubSubEngine, generatedSchema: GraphQLSchema): GraphbackRuntimeContext => {
  const crudDb = new PgKnexDBDataProvider(db);
  const crudService = new CRUDService(crudDb, pubSub);

  // in a real implementation this mapping would be built from `generatedSchema`
  const fieldMapping = {
    user: {
      id: "id",
      name: "user_name",
      age: "age"
    }
  };

  return {
    crudService,
    crudDb,
    pubSub,
    fieldMapping
  };
};

This would then be passed to the resolvers, which can use the mapping to transform fields dynamically when performing queries and mutations.
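
As a rough sketch of how a resolver could consume this mapping (the `toDbObject` helper and the resolver shape are illustrative, not Graphback APIs):

// hypothetical helper: rename GraphQL input fields to their database columns
const toDbObject = (mapping: { [field: string]: string }, input: any) => {
  const mapped: any = {};
  for (const [field, column] of Object.entries(mapping)) {
    if (input[field] !== undefined) {
      mapped[column] = input[field];
    }
  }
  return mapped;
};

// usage inside a hypothetical generated resolver map
const resolvers = {
  Mutation: {
    createUser: (_: any, args: any, context: any) =>
      context.crudService.create("user", toDbObject(context.fieldMapping.user, args.input))
  }
};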

The benefit of this approach is that the generated schema is used as the single source of truth. A standalone mapping file could become outdated, or the user could modify it and break the mappings.

craicoverflow avatar Jan 16 '20 12:01 craicoverflow

From my point of view, option 2 will be a no-brainer in the new architecture we have proposed. Generating a JSON mapping will bring some additional challenges and complexity.

wtrocki avatar Jan 16 '20 12:01 wtrocki

Just to make sure: will it still be possible to use custom schema directives? I want to be able to mark up my schema, even manually, to describe custom column names, table names, or potentially even column types. Is this issue just about the under-the-hood implementation in Graphback? I'm hoping that, as a developer, I wouldn't need to write any of this kind of imperative code.

lastmjs avatar Jan 16 '20 16:01 lastmjs

Yes. Everything will be possible through annotations on the schema alone. More info here: https://graphback.dev/docs/database-schema-migrations#changing-column-names-in-graphback

Once we apply this change, that documentation will be deprecated.

wtrocki avatar Jan 16 '20 17:01 wtrocki

Example Schema

"""
@db.name: 'user_account'
"""
type User {
  id: ID!
  """
  @db.name: 'user_name'
  """
  name: String
  age: Int
}

Knex Level

  • Keeps database-specific field mapping logic at the database layer.
  • The mapping definition is generated in-memory on server start-up, so the mapping data stays up to date with the generated schema (the source of truth); a sketch follows this list.
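
A minimal sketch of building that in-memory mapping from the annotated schema, assuming the @db.name annotations live in type and field descriptions as in the example schema above (the helper name is hypothetical, not a Graphback API):

import { GraphQLObjectType, GraphQLSchema } from 'graphql';

// hypothetical helper: builds a { typeName: { graphqlField: dbColumn } } map at start-up
const buildFieldMapping = (schema: GraphQLSchema) => {
  const mapping: { [type: string]: { [field: string]: string } } = {};
  for (const type of Object.values(schema.getTypeMap())) {
    // skip introspection types and anything that is not an object type
    if (!(type instanceof GraphQLObjectType) || type.name.startsWith('__')) {
      continue;
    }
    mapping[type.name] = {};
    for (const field of Object.values(type.getFields())) {
      const annotation = field.description && field.description.match(/@db\.name:\s*'([^']+)'/);
      // fall back to the GraphQL field name when there is no @db.name annotation
      mapping[type.name][field.name] = annotation ? annotation[1] : field.name;
    }
  }
  return mapping;
};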

Example Knex Layer Implementation

public async create(name: string, data: Type) {
  // `toTable(name)` resolves the database table for the model;
  // `toTable(name, data)` renames GraphQL fields to their database columns
  const dbResult = await this.db(this.tableMapper.toTable(name).name)
    .insert(this.tableMapper.toTable(name, data))
    .returning('*');

  if (dbResult && dbResult[0]) {
    // rename database columns back to GraphQL fields on the way out
    return this.tableMapper.fromTable(name, dbResult[0]);
  }

  throw new NoDataError(`Cannot create ${name}`);
}
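
The `tableMapper` used above implies an interface roughly like the following (inferred from the example; these exact signatures are an assumption, not Graphback's API):

interface TableMapper {
  // resolve the database table for a model, e.g. "User" -> "user_account"
  toTable(modelName: string): { name: string };
  // rename GraphQL fields on a data object to their database columns
  toTable(modelName: string, data: any): any;
  // rename database columns on a result row back to GraphQL fields
  fromTable(modelName: string, row: any): any;
}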

Resolver Level

  1. Field resolvers are generated at code generation time, so custom resolvers will not stay up to date.

We could have a plugin under the new architecture which adds custom field resolvers mapping to the latest fields in the generated DSL. I can investigate, but I believe a CRUD-level mapping implementation is the better option: field resolvers don't map input type fields anyway, so we would have to implement some mapping in both places.

  2. Elegant way to map return fields without having to create a new object.

  3. Works for transforming from table columns to GraphQL fields in queries, but not the other way around in mutations, so some form of mapping would also need to happen at the CRUD layer for mutations anyway.

  4. Worse performance than mapping directly at the database level. @wtrocki any source for this?

Example Resolver Implementation:

User: {
  id: ({ id }) => id,
  name: ({ user_name }) => user_name, // mapping database column "user_name" to GraphQL field "name"
  age: ({ age }) => age
},
Query: {
  findAllUsers: (_, args, context) => {
    validateRuntimeContext(context);
    return context.crudService.findAll("user_account");
  }
},

Runtime Field Resolvers

A third option is to generate a customised runtime schema with up-to-date field resolvers mapping to the latest columns in the database. This is a risky approach since it would have to be done at runtime (in the server application code). I am currently spiking this approach to see how feasible it is.
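
A minimal sketch of that idea, assuming a field mapping like the one described above is available at start-up (the helper name is hypothetical):

// builds per-type field resolvers from a { typeName: { field: column } } mapping
const buildRuntimeFieldResolvers = (fieldMapping: { [type: string]: { [field: string]: string } }) => {
  const resolvers: any = {};
  for (const [typeName, fields] of Object.entries(fieldMapping)) {
    resolvers[typeName] = {};
    for (const [field, column] of Object.entries(fields)) {
      // expose the database column value under the GraphQL field name
      resolvers[typeName][field] = (parent: any) => parent[column];
    }
  }
  return resolvers;
};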

craicoverflow avatar Jan 20 '20 16:01 craicoverflow

In terms of performance, the execution goes through the fields anyway: https://github.com/graphql/graphql-js/blob/master/src/execution/execute.js#L397-L418

So there is no difference between mapping the field ourselves and using the default resolver.
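
For context, graphql-js runs a resolver function per field either way; its default resolver is essentially a property lookup (a simplified paraphrase, not the exact source):

// simplified paraphrase of graphql-js's defaultFieldResolver
const defaultFieldResolver = (source: any, args: any, context: any, info: any) => {
  if (typeof source === 'object' || typeof source === 'function') {
    const property = source[info.fieldName];
    // invoke resolver-style methods, otherwise return the property value
    return typeof property === 'function' ? property(args, context, info) : property;
  }
};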

wtrocki avatar Jan 20 '20 17:01 wtrocki

We have decided to go with the Knex-level implementation.

  • Builds the mapping definition on server start, keeping it up to date with the latest generated schema.
  • Can map in both directions (mutations and queries).
  • Implementations can be swapped out behind a dedicated table mapping interface.
  • Keeps database-level knowledge at the database layer.

craicoverflow avatar Jan 22 '20 10:01 craicoverflow

To clarify: would we use this mapping at the Knex level or in the CRUD methods?

wtrocki avatar Jan 22 '20 10:01 wtrocki

To clarify: would we use this mapping at the Knex level or in the CRUD methods?

Knex level.

craicoverflow avatar Jan 22 '20 10:01 craicoverflow

I'm going to take this one now - it's a crucial part of making Graphback work with existing databases.

craicoverflow avatar Mar 23 '20 11:03 craicoverflow

On second thought, I am going to hold off.

https://github.com/aerogear/graphback/pull/916 will change how we map anyway, so this work would have to be rewritten.

craicoverflow avatar Mar 23 '20 11:03 craicoverflow