Hasura metadata API slows down as the number of tracked tables increases
I have a multi-tenant Hasura application, and tracking performance degrades as the number of tracked tables in the database grows.
Is this intended behavior? How can I speed up tracking updates for a small schema change, such as a new table in any given schema, so that tracking the new table doesn't take 30 seconds when there are 2000 other tables in the database?
Version Information
Server Version: v2.33.4
CLI Version (for CLI related issue):
Environment
Self-hosted GraphQL Engine container running in Docker
PostgreSQL 14.5 on x86_64-pc-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
What is the current behaviour?
Tracking tables and creating object/array relationships take an increasingly long time as the number of tables in the database grows.
What is the expected behaviour?
The time it takes to track a single table should not grow in proportion to the number of tables already in the database.
How to reproduce the issue?
- Fill the database with more tracked tables; as the number of tables grows, metadata API calls take longer as well (see the setup sketch below).
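A minimal sketch of how the database can be filled with tracked tables for this benchmark. The URL, admin-secret environment variable, `default` source name, and `t_<i>` table naming are assumptions for illustration, not part of the original setup:

```python
# Fill the database with tracked tables to reproduce the slowdown.
# URL, admin secret, "default" source, and t_<i> naming are assumptions.
import os
import requests

HASURA_URL = "http://localhost:8080"  # assumed local container
HEADERS = {"x-hasura-admin-secret": os.environ["HASURA_ADMIN_SECRET"]}

def run_sql(sql: str) -> None:
    """Execute raw SQL through Hasura's /v2/query endpoint."""
    resp = requests.post(
        f"{HASURA_URL}/v2/query",
        headers=HEADERS,
        json={"type": "run_sql", "args": {"source": "default", "sql": sql}},
    )
    resp.raise_for_status()

def track_table(schema: str, name: str) -> None:
    """Track one table via the pg_track_table metadata API."""
    resp = requests.post(
        f"{HASURA_URL}/v1/metadata",
        headers=HEADERS,
        json={
            "type": "pg_track_table",
            "args": {"source": "default", "table": {"schema": schema, "name": name}},
        },
    )
    resp.raise_for_status()

# Create and track ~1700 tables; tracking the last ones is noticeably
# slower than tracking the first ones.
for i in range(1700):
    run_sql(f"CREATE TABLE public.t_{i} (id serial PRIMARY KEY);")
    track_table("public", f"t_{i}")
```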
Example timings from a Python function that calls the Hasura metadata API via the requests library, using pg_track_table to track a single table right after it has been created in the database:
```
# 0 tables in the DB
func: track_tables took: 0.53 sec
...
# +1700 tables in the DB (even spread across different schemas; the number of schemas doesn't matter)
func: track_tables took: 9.85 sec
```
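A minimal sketch of the timed tracking call behind the numbers above, reusing the same assumed URL, secret, and source as in the setup sketch; the printed format mirrors the `func: ... took: ... sec` lines:

```python
# Timed version of the single-table tracking call that produced the
# numbers above; URL, secret, and source name are assumptions.
import os
import time
import requests

HASURA_URL = "http://localhost:8080"
HEADERS = {"x-hasura-admin-secret": os.environ["HASURA_ADMIN_SECRET"]}

def track_tables(schema: str, name: str) -> None:
    """Track a single, newly created table and report the elapsed time."""
    start = time.perf_counter()
    resp = requests.post(
        f"{HASURA_URL}/v1/metadata",
        headers=HEADERS,
        json={
            "type": "pg_track_table",
            "args": {"source": "default", "table": {"schema": schema, "name": name}},
        },
    )
    resp.raise_for_status()
    print(f"func: track_tables took: {time.perf_counter() - start:.2f} sec")

track_tables("public", "new_table")
```

Each call sends exactly one pg_track_table request, so the elapsed time is dominated by whatever per-table work Hasura does on its side when rebuilding metadata.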
Keywords
Slow, track, tables, timeout
+1 It's incredibly slow...
+1 ultra incredibly slow..
+1 It gets slower every time I create a new table.
+1 it's still very very slow