[PostgreSQL] Direct SELECTs and read-only operations to RO replicas
Hi,
I'm running minetestserver on Kubernetes with CloudNativePG as the PostgreSQL operator, on 3 nodes with DAS. This gives 3 DB ClusterIP services for the minetestdb:
- minetestdb-r (for any node)
- minetestdb-ro (for replicas only)
- minetestdb-rw (for the active RW node)
In world.mt, the pgsql_mapserver_connection is of course set to the rw Service:
pgsql_mapserver_connection = host=minetestdb-rw port=5432 user=minetest password=...
Would it be possible for mapserver to read an extra entry, if it exists, such as:
pgsql_mapserver_connection_ro = host=minetestdb-ro port=5432 user=minetest password=...
to redirect all SELECT queries to the read-only nodes, instead of adding load to the main DB used by the minetestserver?
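A minimal sketch of how such an optional entry could be handled, assuming the settings are available as a map keyed by the world.mt entry names (the function name and fallback behavior are illustrative, not mapserver's actual code):

```go
package main

import "fmt"

// pickConnections returns the connection strings to use for writes and reads.
// If no separate read-only entry is configured, reads fall back to the
// read-write connection, so existing setups keep working unchanged.
func pickConnections(settings map[string]string) (rw, ro string) {
	rw = settings["pgsql_mapserver_connection"]
	ro = settings["pgsql_mapserver_connection_ro"]
	if ro == "" {
		ro = rw // no replica configured: SELECTs go to the primary
	}
	return rw, ro
}

func main() {
	settings := map[string]string{
		"pgsql_mapserver_connection":    "host=minetestdb-rw port=5432 user=minetest",
		"pgsql_mapserver_connection_ro": "host=minetestdb-ro port=5432 user=minetest",
	}
	rw, ro := pickConnections(settings)
	fmt.Println("writes:", rw)
	fmt.Println("reads: ", ro)
}
```

With this fallback, the new entry stays strictly optional.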
Possible, yes. Alternatively, we could just separate the map database (ro) and the mapserver database (mostly ro, but also rw).
I'm curious: do you have that much load that you need to separate the db-connection or is there another reason? :thinking:
Hi,
I like the idea of having a generic entry like pgsql_mapserver_connection_ro, because minetestserver could also benefit from it (thinking broadly).
For the inquisitively gifted: I preferred to buy 3 relatively small nodes for easy maintenance and to survive hardware failures. This gives me many options for storage, but basically:
- High-performance DAS for clusterizable workloads, such as the DB
- Low-performance shared HA storage based on csi-cephfs. Of course Ceph works very well, but like any HA/NAS/replicated/shared storage, it can never approach DAS PCIe performance. Minetest is one of the many services running there.
There are relatively few simultaneous users on the server (2-7), for about 3-6h/day.
After 2 years the DB is 9GB, and the "initial rendering complete" took 8 hours, with the main DB high in CPU and the RO replicas of course doing almost nothing. The initial rendering time could probably be cut in half or more by using the replicas.
When progressing through "incremental rendering" with 1 user connected, I have about this load:
So about 1 core for the mapserver, and half a core for the main DB and the minetestserver.
During the initial rendering, it was 1 core for the DB and 0.5 core for the mapserver.
This has generated 347728 tiles, for a total of 900MB in mapserver.tiles, so far (it has been running for 12h, including the initial 8h).
This could raise the next complementary feature request: store the tiles in the DB ;-) (shared NAS storage does not handle many small files well)
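For reference, storing tiles in PostgreSQL might look roughly like this; the table and column names below are hypothetical, not mapserver's actual schema, and the upsert overwrites a tile when it is re-rendered:

```go
package main

import "fmt"

// Hypothetical schema for keeping rendered tiles in PostgreSQL instead of
// on shared storage. One row per tile, keyed by layer/zoom/coordinates.
const createTiles = `
CREATE TABLE IF NOT EXISTS tiles (
	layer INT NOT NULL,
	zoom  INT NOT NULL,
	x     INT NOT NULL,
	y     INT NOT NULL,
	data  BYTEA NOT NULL,
	mtime BIGINT NOT NULL,
	PRIMARY KEY (layer, zoom, x, y)
)`

// Upsert so that incremental re-renders replace stale tiles in place.
const upsertTile = `
INSERT INTO tiles (layer, zoom, x, y, data, mtime)
VALUES ($1, $2, $3, $4, $5, $6)
ON CONFLICT (layer, zoom, x, y)
DO UPDATE SET data = EXCLUDED.data, mtime = EXCLUDED.mtime`

func main() {
	fmt.Println(createTiles)
	fmt.Println(upsertTile)
}
```

A side benefit: tiles stored this way would also be read through the same RO replicas.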
Good night !