Ian Barwick
Right now the best option is to parse the output of `repmgr cluster show --csv`. Note that if you have more than one node in recovery, this won't tell...
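For illustration, a minimal sketch of parsing that output. This assumes the three CSV columns are node ID, availability (0 = available, -1 = unreachable) and recovery status (0 = not in recovery, 1 = in recovery, -1 = unknown); check the `repmgr cluster show` documentation for your version before relying on these codes:

```python
import csv
import io

# Hypothetical output from `repmgr cluster show --csv`;
# columns (assumed): node ID, availability, recovery status
sample = """1,0,0
2,0,1
3,-1,-1
"""

def nodes_in_recovery(csv_text):
    """Return the IDs of nodes whose recovery status column is 1."""
    rows = csv.reader(io.StringIO(csv_text))
    return [int(node_id) for node_id, availability, recovery in rows
            if int(recovery) == 1]

print(nodes_in_recovery(sample))  # → [2]
```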
How are you storing the password?
Is `~/.pgpass` present and correct on all nodes? From `dragon03` can you execute: `ssh -o Batchmode=yes -q -o ConnectTimeout=10 dragon01 env` and attach the output?
It's entirely possible that there are differences in the environment when executing a command via ssh, and when logging in directly and executing the same command. Try (from `dragon03`): `ssh...
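As a quick, hypothetical way of spotting such differences once you have both `env` dumps (the variable values below are illustrative, not taken from the actual report):

```python
def parse_env(text):
    """Parse `env` output (KEY=value lines) into a dict."""
    env = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("=")
        env[key] = value
    return env

# Hypothetical dumps: one captured via `ssh dragon01 env`,
# one from a normal interactive login shell on dragon01
ssh_env = parse_env("PATH=/usr/bin\nHOME=/home/postgres\n")
login_env = parse_env(
    "PATH=/usr/pgsql-12/bin:/usr/bin\n"
    "HOME=/home/postgres\n"
    "PGPASSFILE=/home/postgres/.pgpass\n"
)

# Keys missing from one environment, or present with differing values
differing = {k for k in ssh_env.keys() | login_env.keys()
             if ssh_env.get(k) != login_env.get(k)}
print(sorted(differing))  # → ['PATH', 'PGPASSFILE']
```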
Then please try (from `dragon03`): `ssh -o Batchmode=yes -q -o ConnectTimeout=10 dragon01 "psql -d 'user=repmgr connect_timeout=10 replication=1 host=dragon03 port=26432 fallback_application_name=repmgr' -c 'IDENTIFY_SYSTEM'"`
OK, that narrows things down a bit; I'll see if the issue can be reproduced.
If these are the full contents of the `repmgr.conf` files:
```
node_id=1
node_name='sbx2'
conninfo='host=sbx2 user=repmgr dbname=repmgr connect_timeout=2'
data_directory='/home/postgres/data'
```
```
node_id=4
node_name='stt1'
conninfo='host=stt1 user=repmgr dbname=repmgr connect_timeout=2'
data_directory='/var/lib/pgsql/12/data'
```
you'll need...
Hi, thanks for the detailed report; I'm working through the backlog here. > First of all, I want to give my thanks for providing repmgr suite for managing postgres replication and...
Here the standby is presumably still running; it is only marked inactive if it is *not* running.
Aha, in that case the repmgrd on the standby probably didn't get a chance to update the metadata. This is probably something we can improve on. Please note that it...