
Keep alive db connection fallback

**Open** · Eklavaya opened this issue 3 years ago · 3 comments

PGSync version: latest (installed from pip)

Postgres version: 14.1

Elasticsearch version: 6.8.6

Redis version: 5.0.8

Python version: 3.7

Problem Description: When we deploy this application on Kubernetes, after some time the database closes the connection because it has been idle, which crashes the application when it polls the connection to listen for notification events. I see you provided a partial solution in the Dockerfile by adding these sysctl settings:

- net.ipv4.tcp_keepalive_time=200
- net.ipv4.tcp_keepalive_intvl=200
- net.ipv4.tcp_keepalive_probes=5

Can we add some fallback in the code to avoid this error, or keep the connection alive the way HikariCP does?

code reference: pgsync/sync.py:968 (latest code)

```python
try:
    conn.poll()
except OperationalError as e:
    logger.fatal(f"OperationalError: {e}")
    os._exit(-1)
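As a sketch of the kind of fallback being asked for here: instead of exiting the process on the first `OperationalError`, the poll could be wrapped in a retry loop that rebuilds the connection with exponential backoff. This is not PGSync's actual code; `conn_factory` is a hypothetical zero-argument callable that returns a fresh connection (e.g. a psycopg2 connection that has already issued `LISTEN`).

```python
import logging
import time

try:
    from psycopg2 import OperationalError
except ImportError:  # allow the sketch to run without psycopg2 installed
    class OperationalError(Exception):
        pass

logger = logging.getLogger(__name__)


def poll_with_reconnect(conn_factory, max_retries=5, backoff=1.0):
    """Poll a LISTEN connection; on OperationalError, rebuild the
    connection via conn_factory instead of exiting the process."""
    conn = conn_factory()
    for attempt in range(max_retries):
        try:
            conn.poll()
            return conn  # poll succeeded; keep using this connection
        except OperationalError as e:
            logger.warning(
                "poll failed (%s); reconnecting (attempt %d)", e, attempt + 1
            )
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
            conn = conn_factory()
    raise OperationalError(f"gave up after {max_retries} reconnect attempts")
```

One caveat with any such retry: notifications delivered while the old connection was dead are lost, so after reconnecting the application would still need to re-sync state rather than rely on the notification stream alone.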

Error Message (if any):

CRITICAL:pgsync.sync: OperationalError: could not receive data from server: Connection timed out

Eklavaya avatar Jan 20 '22 06:01 Eklavaya

@Eklavaya Can you please change this line and see if the re-connection is handled?

from

```python
return sa.create_engine(url, echo=True, connect_args=connect_args)
```

to

```python
return sa.create_engine(url, echo=True, pool_pre_ping=True, connect_args=connect_args)
```
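For reference, a hedged sketch combining `pool_pre_ping` (which issues a lightweight ping on each pool checkout and transparently replaces dead connections) with libpq-level TCP keepalives passed through `connect_args`. The function name and timeout values are illustrative, not PGSync's code; the `keepalives*` keys are standard libpq connection parameters honoured by psycopg2.

```python
# libpq-level TCP keepalives: an application-side alternative to the
# host sysctls mentioned above, using standard libpq connection parameters.
PG_KEEPALIVE_CONNECT_ARGS = {
    "keepalives": 1,             # enable TCP keepalives on the socket
    "keepalives_idle": 200,      # seconds of idleness before the first probe
    "keepalives_interval": 200,  # seconds between unacknowledged probes
    "keepalives_count": 5,       # connection dropped after this many failures
}


def create_pg_engine(url: str, echo: bool = False):
    """Create an engine that both pings pooled connections on checkout
    (pool_pre_ping) and keeps idle sockets alive at the TCP level."""
    import sqlalchemy as sa  # deferred so the settings above are usable standalone

    return sa.create_engine(
        url,
        echo=echo,
        pool_pre_ping=True,
        connect_args=PG_KEEPALIVE_CONNECT_ARGS,
    )
```

Note that `pool_pre_ping` only helps connections checked out of the pool; it does nothing for a long-lived connection that is already checked out and blocked in `poll()`, which is why the TCP keepalives matter for the LISTEN/NOTIFY path.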

toluaina avatar Jan 20 '22 20:01 toluaina

I have tried this approach as well, but it is still restarting for the same reason.

Eklavaya avatar Jan 22 '22 04:01 Eklavaya

Thinking about this again. I don't think this can be handled reliably by the application itself.

Perhaps a connection pooler like pgbouncer or server-side settings would be the best way to handle disconnections.

Alternatively, if you are running in Docker, you can set the parameters above.
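For example, with plain `docker run` the same sysctls can be set per container (the image name `pgsync` here is a placeholder):

```shell
docker run \
  --sysctl net.ipv4.tcp_keepalive_time=200 \
  --sysctl net.ipv4.tcp_keepalive_intvl=200 \
  --sysctl net.ipv4.tcp_keepalive_probes=5 \
  pgsync
```

On Kubernetes the equivalent goes in the pod's `securityContext.sysctls`, but note that the `net.ipv4.tcp_keepalive_*` parameters are treated as unsafe sysctls and must first be allowlisted on the kubelet (`--allowed-unsafe-sysctls`).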

toluaina avatar Jan 25 '22 09:01 toluaina