ainlolcat
I suppose this issue has something in common with https://github.com/zalando/patroni/issues/422, but with a more specific cluster configuration. I suppose we can expose this as a label on the pod with the role (like the existing master/replica...
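For illustration, a minimal sketch of what consuming such a label could look like, assuming the cascading role is published the same way the existing master/replica role is. The label key `cascade-role` and value `intermediate` are hypothetical, not an existing Patroni label:

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config; inside the cluster use load_incluster_config().
config.load_kube_config()
v1 = client.CoreV1Api()

# List pods that act as intermediate nodes in a replication cascade
# (hypothetical label, shown only to illustrate the idea).
pods = v1.list_namespaced_pod(
    namespace="default",
    label_selector="cascade-role=intermediate",
)
for pod in pods.items:
    print(pod.metadata.name, pod.metadata.labels)
```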
I think cascading replication depends on hardware and/or network. For example, I can allow 1>2>3>4 but don't want 1>3>2>4 because 3 and 4 are in a different network or have slower...
We can specify it at startup but cannot change it if some node fails. For example, if we have 1>2>3>4 and 2 is damaged or cannot start replication or anything...
Without fixed names and roles it will be hard to describe such a topology. I can design an algorithm for our topology but cannot generalize it for every possible one. My best proposal...
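A minimal sketch of the kind of description I have in mind, assuming fixed node names and an ordered list of allowed upstreams per node; when 2 fails, 3 falls back to 1 and 4 stays on 3. The structure and helper below are hypothetical, not anything Patroni provides today:

```python
# Node 2 may only follow 1; node 3 prefers 2 but may fall back to 1;
# node 4 prefers 3, then 2, then 1.
ALLOWED_UPSTREAMS = {
    "node2": ["node1"],
    "node3": ["node2", "node1"],
    "node4": ["node3", "node2", "node1"],
}

def pick_upstream(node, healthy_nodes):
    """Return the first allowed upstream that is still healthy, or None."""
    for candidate in ALLOWED_UPSTREAMS.get(node, []):
        if candidate in healthy_nodes:
            return candidate
    return None

# If node2 is damaged, node3 re-points to node1 and node4 keeps following node3.
healthy = {"node1", "node3", "node4"}
print(pick_upstream("node3", healthy))  # -> node1
print(pick_upstream("node4", healthy))  # -> node3
```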
I want to implement this feature, so it would be nice to come to an agreement so we can sync branches in the future.
Yes. Not everyone likes someone messing around with the database with RW requests. We have custom tests for a simple RW check - insert/update/select/delete in a special table which is pretty small but still...
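Roughly like this - a minimal sketch of such an RW check, assuming a dedicated `healthcheck` table (`id int primary key, ts timestamptz`); the table name and DSN are made up for the example:

```python
import psycopg2

def rw_check(dsn):
    # Exercise insert/update/select/delete in one small transaction.
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("INSERT INTO healthcheck (id, ts) VALUES (1, now()) "
                        "ON CONFLICT (id) DO UPDATE SET ts = excluded.ts")
            cur.execute("UPDATE healthcheck SET ts = now() WHERE id = 1")
            cur.execute("SELECT ts FROM healthcheck WHERE id = 1")
            row = cur.fetchone()
            cur.execute("DELETE FROM healthcheck WHERE id = 1")
            return row is not None

if __name__ == "__main__":
    print(rw_check("host=localhost dbname=postgres user=postgres"))
```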
In some cases (logging configuration errors, core dumps, parts of some backup, another application's errors, stuck WALs because of an abandoned replication slot, etc.) we can see a full disk only on...
Today I had an issue with failover - someone forgot to consume data from a replication slot and postgres started acting funny after the disk had filled up. It starts and tries to...
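We now watch for slots like that. A minimal sketch of the check, using the PostgreSQL 10+ function and column names; the threshold and DSN are arbitrary:

```python
import psycopg2

QUERY = """
SELECT slot_name, active,
       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS retained_bytes
FROM pg_replication_slots
ORDER BY retained_bytes DESC
"""

def check_slots(dsn, max_bytes=10 * 1024 ** 3):
    # Warn about slots that hold back more WAL than max_bytes (default 10 GiB).
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(QUERY)
        for slot_name, active, retained in cur.fetchall():
            if retained is not None and retained > max_bytes:
                print(f"slot {slot_name} (active={active}) retains {retained} bytes of WAL")

if __name__ == "__main__":
    check_slots("host=localhost dbname=postgres user=postgres")
```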