pgsync
All replication slots are in use
PGSync version: 2.1.9
Postgres version: 13
Error Message (if any):
```
cursor.execute(statement, parameters)
psycopg2.errors.ConfigurationLimitExceeded: all replication slots are in use
HINT: Free one or increase max_replication_slots.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/pgsync/base.py", line 857, in fetchone
    row = conn.execute(statement).fetchone()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1286, in execute
    ret = self._execute_context(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1842, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2023, in _handle_dbapi_exception
    util.raise_(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
    raise exception
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1799, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 717, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (psycopg2.errors.ConfigurationLimitExceeded) all replication slots are in use
HINT: Free one or increase max_replication_slots.
```
You will need to increase max_replication_slots, as described here.
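For reference, on a self-managed instance this could look like the following (a sketch assuming superuser access; `16` is an arbitrary example value, and a server restart is required for the change to take effect — on managed services you would change the equivalent database flag instead):

```sql
-- requires superuser; only takes effect after a restart
ALTER SYSTEM SET max_replication_slots = 16;

-- after restarting Postgres, verify the new limit:
SHOW max_replication_slots;
```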
To add to the above: if your schema name changed, pgsync will attempt to create a new replication slot with the new name while the old slot still exists.
If you only need the new replication slot (and you're okay with starting a new sync from scratch), drop the old one with:
```sql
-- list the current slots
select * from pg_replication_slots;

-- drop the old slot
select pg_drop_replication_slot('your_slot_name');
```
Importantly, when you drop the replication slot, you lose your checkpoint position and will need to re-sync the entire database with pgsync.
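Before dropping anything, it can help to see how much WAL each slot is actually pinning — a slot retaining a lot of WAL that is not advancing is a good candidate for cleanup. A sketch (works on Postgres 9.6+, including the version 13 reported above):

```sql
-- show each slot and the amount of WAL it is retaining
select slot_name,
       active,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) as retained_wal
from pg_replication_slots;
```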
Thanks @voyc-jean. This is correct.
Thanks for the quick reply @toluaina @voyc-jean.
I'm using a GCP-managed Postgres instance hosting multiple databases (devdb, stagedb, qadb, etc.), and my max_replication_slots is currently at its maximum limit of 32. In this scenario, raising max_replication_slots above 32 would require upgrading the Postgres instance, which would cost the company money, so increasing the slot count may not be an option for us. Is there any other way we can periodically drop a slot that has already been synced with Elasticsearch?
@AnkitKundariya When you drop a replication slot, Postgres is free to remove the WAL it was retaining, and new transactions will no longer be 'recorded' for that slot. Therefore, there's no way for pgsync to 'replay' those changes and sync them to Elasticsearch once the replication slot is dropped.
Your Elasticsearch index will obviously still remain populated up to the point where the replication slot was dropped, but if you want to start syncing again you'll have to re-index.
Are you certain that all 32 of your replication slots are actually active?
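One way to check (a sketch; run it against the instance in question) is to list the slots that are not currently being consumed:

```sql
-- slots with active = false have no connected consumer right now
select slot_name, plugin, slot_type, active
from pg_replication_slots
where not active;
```

Any slot that shows up here and belongs to a pgsync instance (or other consumer) you no longer run is safe to drop.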
@AnkitKundariya
I would second @voyc-jean's suggestion and clean up unused replication slots.
```sql
-- list the current slots
select * from pg_replication_slots;

-- drop the unused slot
select pg_drop_replication_slot('your_slot_name');
```