graphql-schema-registry
Schema breakdown migration
Problem
In order to develop the new schema breakdown feature, we will need some new tables in the MySQL database.
Changes
New migration adding 7 new tables (a sketch follows the list):
- type_def_types
- type_def_fields
- type_def_implementations
- type_def_field_arguments
- type_def_operations
- type_def_operation_parameters
- type_def_subgraphs
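For illustration, a minimal sketch of what the first two of these tables could look like as a Knex-style migration; the column definitions are assumptions for the sake of the example, not the actual schema, and the remaining tables would follow the same pattern:

```ts
import { Knex } from 'knex';

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable('type_def_types', (table) => {
    table.increments('id');
    table.string('name').notNullable(); // e.g. "User" -- illustrative column
    table.string('type'); // e.g. "object", "enum", "scalar" -- illustrative column
  });

  await knex.schema.createTable('type_def_fields', (table) => {
    table.increments('id');
    table.integer('type_id').unsigned().references('id').inTable('type_def_types');
    table.string('name').notNullable();
  });
}

export async function down(knex: Knex): Promise<void> {
  // Drop in reverse order so the foreign key does not block the drop
  await knex.schema.dropTableIfExists('type_def_fields');
  await knex.schema.dropTableIfExists('type_def_types');
}
```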
Testing
Not needed; this PR only adds the database migration.
Please add small PRs with functionality, not big-bang multi-phase ones, to follow YAGNI. I can't foresee which DB tables are or aren't going to be used. For example, `def_operations` are not needed until you add some kind of query analytics. `def_types` could be added, but it needs to have a UI or API impact along with a data migration (parse the existing type_defs and fill the new table; a sketch of such a backfill follows).
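For illustration, such a backfill could be a one-off script along these lines, assuming graphql-js for parsing and Knex for the inserts; the `schema` table and `type_defs` column names are assumptions, not necessarily the registry's actual schema:

```ts
import { parse, visit } from 'graphql';
import { Knex } from 'knex';

// Hypothetical backfill: parse every stored SDL and fill type_def_types.
async function backfillTypes(knex: Knex): Promise<void> {
  const rows: { id: number; type_defs: string }[] = await knex('schema').select(
    'id',
    'type_defs'
  );

  for (const row of rows) {
    const names: string[] = [];
    // Collect object type names from the SDL; other kinds (enums, scalars)
    // would get their own visitor entries in the same way.
    visit(parse(row.type_defs), {
      ObjectTypeDefinition(node) {
        names.push(node.name.value);
      },
    });
    if (names.length) {
      await knex('type_def_types').insert(names.map((name) => ({ name })));
    }
  }
}
```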
We have the code working to do the breakdown in another branch, but it's not quite small. If you want, we can open a PR for this code, so you will be able to see the breakdown strategy and the breaking-change strategy that fill these tables.
@oscarSeGa I want to be agile and have small PRs that add functionality as well.. currently it's a big bang that I'm afraid of.
Also, I wonder how much it's going to impact the performance. If I post a big schema, how will you fill all of the tables? How much time will it take, how many requests? What happens with the endpoint if there is an error in that part of the code (which doesn't seem critical)?
Because we're trying to remove validation from the read endpoints, and it would be nice to keep the number of inserts to a minimum: https://github.com/pipedrive/graphql-schema-registry/pull/129
> Also, I wonder how much it's going to impact the performance.
Right now the push endpoint is around 70-90 ms; adding the breakdown and the breaking-change functionality increases it to 105-120 ms.
> How will you fill all of the tables? How much time will it take, how many requests?
For the worst-case scenario, the number of inserts+selects is around 30 database calls. The way we wrote the code, it doesn't matter whether the schema has 30 new queries or just 1: we insert them in batches (same for selections), so even with a huge schema everything is grouped and we avoid making a lot of calls.
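For illustration, a minimal sketch of the batching idea, assuming Knex; the table and the row shape are hypothetical, not the actual code:

```ts
import { Knex } from 'knex';

// Hypothetical row shape for illustration
interface FieldRow {
  type_id: number;
  name: string;
}

// One INSERT per table for the whole batch -- the call count stays flat
// whether the schema brings 1 new row or 30.
async function insertFieldsBatch(
  trx: Knex.Transaction,
  rows: FieldRow[]
): Promise<void> {
  if (rows.length === 0) return;
  await trx('type_def_fields').insert(rows); // single multi-row INSERT
}
```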
> What happens with the endpoint if there is an error in that part of the code (which doesn't seem critical)?
Everything runs within the same transaction; if something fails, it's simply rolled back.
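A minimal sketch of that behaviour, assuming Knex transactions; `storeSchema` and `storeBreakdown` are hypothetical names standing in for the existing push logic and the new breakdown inserts:

```ts
import { Knex } from 'knex';

// Hypothetical stand-ins for the existing push logic and the breakdown inserts
declare function storeSchema(trx: Knex.Transaction, typeDefs: string): Promise<number>;
declare function storeBreakdown(
  trx: Knex.Transaction,
  schemaId: number,
  typeDefs: string
): Promise<void>;

async function pushSchema(knex: Knex, typeDefs: string): Promise<void> {
  await knex.transaction(async (trx) => {
    const schemaId = await storeSchema(trx, typeDefs);
    await storeBreakdown(trx, schemaId, typeDefs);
    // Any error thrown here rejects the callback and Knex rolls the
    // whole transaction back, so the push fails atomically.
  });
}
```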
One clarification: this part of the code is only called on the push endpoint; it's not used when someone consumes a query, so the number of calls is vastly lower than for the usage feature.
Please open an up-to-date PR with the DB migration, code, and view changes all in one. If you're afraid of making it too big, try to add the bare minimum first (agile approach) and add more and more features on top of it (e.g. only schema types first, and add the others later).