Tolu Aina

304 comments of Tolu Aina

Sorry for the slow response again. Can you confirm how much space was available before? My guess is that the large joins are resulting in temp tables being created. Have you...
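One quick way to check the temp-table theory is to watch the `temp_files` and `temp_bytes` counters in `pg_stat_database` while the sync runs. A minimal sketch, assuming a placeholder connection URL you would replace with your own:

```python
import sqlalchemy as sa

# placeholder URL; point this at the database pgsync reads from
engine = sa.create_engine("postgresql://user:password@localhost:5432/mydb")

with engine.connect() as conn:
    # pg_stat_database tracks how many temp files have spilled to disk
    # and their cumulative size; sample this before and during the sync
    # and compare the counters
    row = conn.execute(sa.text(
        "SELECT temp_files, temp_bytes "
        "FROM pg_stat_database "
        "WHERE datname = current_database()"
    )).one()
    print(f"temp_files={row.temp_files}, temp_bytes={row.temp_bytes}")
```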

Can you re-run the application in verbose mode, i.e. `pgsync -c schema.json -v`? This should log the actual query being run, and then you can run that query against your...

You can also send me the output, with any sensitive data removed, and I can extract the query you need to run.

I am guessing this is the initial sync.

Sorry about this. Here is the resulting query:

```sql
SELECT anon_1."JSON_BUILD_ARRAY_1",
       anon_1."JSON_BUILD_OBJECT_1",
       anon_1.id
FROM (
    SELECT JSON_BUILD_ARRAY(anon_2._keys) AS "JSON_BUILD_ARRAY_1",
           JSON_BUILD_OBJECT(
               'id', transactions_1.id,
               'status', transactions_1.status,
               'external_transactions', anon_2.external_transactions
           ) AS "JSON_BUILD_OBJECT_1",
           transactions_1.id AS id
    FROM...
```

Can you also run an **_EXPLAIN_** on this query?
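If it is easier than a psql shell, here is a sketch of running `EXPLAIN` through SQLAlchemy; the URL is a placeholder and `query` is a stand-in for the full query logged by the verbose run:

```python
import sqlalchemy as sa

engine = sa.create_engine("postgresql://user:password@localhost:5432/mydb")

# stand-in; paste the full query from the verbose pgsync output here
query = "SELECT 1"

with engine.connect() as conn:
    # plain EXPLAIN prints the plan without executing the query;
    # EXPLAIN (ANALYZE, BUFFERS) also runs it and reports actual
    # timings plus whether sorts and hashes spilled to disk
    for (line,) in conn.execute(sa.text(f"EXPLAIN {query}")):
        print(line)
```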

I think you need to increase `work_mem` and restart the database server.
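Before editing `postgresql.conf`, you can raise `work_mem` for a single session to test whether a larger value actually keeps the joins in memory. A sketch, with a placeholder URL and an arbitrary test size:

```python
import sqlalchemy as sa

engine = sa.create_engine("postgresql://user:password@localhost:5432/mydb")

with engine.connect() as conn:
    print("before:", conn.execute(sa.text("SHOW work_mem")).scalar())
    # session-level override; 256MB is a test value, not a recommendation
    conn.execute(sa.text("SET work_mem = '256MB'"))
    print("after:", conn.execute(sa.text("SHOW work_mem")).scalar())
```

If the session-level value helps, the same setting can then be made permanent server-side.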

Is the large join the bottleneck here? Does that query run in a psql shell? I can't think of an easy way around the large joins. I am working on...

@Eklavaya Can you please change [this line](https://github.com/toluaina/pgsync/blob/master/pgsync/base.py#L941) and see if the re-connection is handled? Change `return sa.create_engine(url, echo=True, connect_args=connect_args)` to `return sa.create_engine(url, echo=True, pool_pre_ping=True, connect_args=connect_args)`.
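For context, a sketch of what that change amounts to; the function name here is illustrative, not the exact one in `pgsync/base.py`:

```python
import sqlalchemy as sa

def create_pg_engine(url: str, connect_args: dict) -> sa.engine.Engine:
    # pool_pre_ping emits a lightweight ping (SELECT 1) whenever a
    # connection is checked out of the pool and transparently replaces
    # connections the server has dropped, instead of failing mid-sync
    return sa.create_engine(
        url,
        echo=True,
        pool_pre_ping=True,
        connect_args=connect_args,
    )
```

The trade-off is one extra round trip per checkout, which is usually negligible next to the sync queries themselves.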

Thinking about this again. I don't think this can be handled reliably by the application itself. Perhaps a connection pooler like [pgbouncer](https://www.pgbouncer.org/) or server-side [settings](https://www.postgresql.org/docs/13/runtime-config-connection.html) would be the best way...
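As one illustration of the server-side route, the linked page covers the TCP keepalive settings; a sketch for inspecting them, again with a placeholder URL:

```python
import sqlalchemy as sa

engine = sa.create_engine("postgresql://user:password@localhost:5432/mydb")

with engine.connect() as conn:
    # keepalive-related settings from the runtime-config-connection
    # docs; a value of 0 means "use the operating system default"
    for name in (
        "tcp_keepalives_idle",
        "tcp_keepalives_interval",
        "tcp_keepalives_count",
    ):
        value = conn.execute(sa.text(f"SHOW {name}")).scalar()
        print(name, "=", value)
```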