`max_connections` threshold strategies for concurrent multi-database access
Obviously the pool is great when you're trying to limit concurrent connections to a single database. But when your app is simultaneously reaching out to multiple databases on the same Postgres server, it's not too hard to blow past the server's `max_connections` limit.
Any interest in a PR that would create a mechanism to throttle total clients across pools for a given host? Usage might be something like the following:
```js
const serverPools = new pg.ServerPools();
const pool = serverPools.getOrCreate(connInfoForSpecificDb);
await pool.query('SELECT NOW()');
```
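For illustration, here's a minimal sketch of what `getOrCreate` might do under the hood. Note that `pg.ServerPools` does not exist today, and the keying scheme and factory parameter are assumptions, not anything pg actually provides:

```javascript
// Hypothetical ServerPools: one pool per database, cached by key.
class ServerPools {
  constructor(createPool) {
    this.createPool = createPool; // factory, e.g. (info) => new pg.Pool(info)
    this.pools = new Map();
  }

  getOrCreate(connInfo) {
    // Assumption: key on host + database so each database reuses its pool.
    const key = `${connInfo.host}/${connInfo.database}`;
    let pool = this.pools.get(key);
    if (!pool) {
      pool = this.createPool(connInfo);
      this.pools.set(key, pool);
    }
    return pool;
  }
}
```

The interesting part (a shared cap across everything in `this.pools`) would layer on top of this lookup.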
Obviously, there are lots of questions/details surrounding things like how you might throttle fairly, but just curious what you think of the idea @brianc .
Thanks.
@adamhamlin Is creating separate pg pool per used database not an option for you?
@alpharder unfortunately, not.
You could imagine a scenario where I need to reach out to 50 different databases. If I set the max connections per pool at 10, I could theoretically have 500 server connections. If my server's `max_connections` setting is 100 (the default), I have a problem.
Now sure, I could configure each pool to allow only 2 connections, but then I'm potentially limiting concurrency. Or I could increase `max_connections`, but maybe I can't, or can't afford to, beef up my server.
My problem is worse if I don't know ahead of time how many databases I need to connect to (like a DB crawler, perhaps).
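Spelling out the worst-case arithmetic from the scenario above (numbers taken from this comment):

```javascript
// Worst case: every pool fills to its per-pool max at the same time.
const databases = 50;
const maxPerPool = 10;
const serverMaxConnections = 100; // Postgres default

const worstCase = databases * maxPerPool; // 500 potential connections
console.log(worstCase > serverMaxConnections); // → true: 5x over the server cap
```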
@adamhamlin yeah, I see...
The only thing that comes to mind is PgBouncer
Yeah, potentially. But that's a much bigger lift and involves other parts of the stack.
I'm mostly interested in an application-code solution. Of course, these limits would only be enforced within a single `pg.ServerPools` instance, but that's no different from how `pg.Pool` works today.
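To make the application-code idea concrete, here's one possible sketch: a counting semaphore shared across all pools for a host, acquired before each query. The `HostThrottle` and `throttledQuery` names are hypothetical, and this doesn't touch pg internals at all; it just gates callers before they reach a pool:

```javascript
// Sketch: a counting semaphore that caps total concurrent checkouts
// across all pools pointing at the same Postgres server.
class HostThrottle {
  constructor(maxClients) {
    this.maxClients = maxClients; // hard cap for the whole host
    this.active = 0;              // checkouts currently in flight
    this.waiters = [];            // resolvers for queued acquire() calls
  }

  acquire() {
    if (this.active < this.maxClients) {
      this.active++;
      return Promise.resolve();
    }
    // Over the cap: park the caller until someone releases.
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  release() {
    const next = this.waiters.shift();
    if (next) {
      next(); // hand the slot directly to the next waiter
    } else {
      this.active--;
    }
  }
}

// Hypothetical wrapper: run a pool query under the host-wide cap.
async function throttledQuery(throttle, pool, text, params) {
  await throttle.acquire();
  try {
    return await pool.query(text, params);
  } finally {
    throttle.release();
  }
}
```

A real implementation would probably also need fairness across pools so one hot database can't starve the others, which is where pool internals start to matter.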
I think you should create a package for this!
@charmander thanks, yeah I think it could be useful. Though I don't know if it can be done "right" as a standalone package vs. an addition to `pg.Pool` with access to all the pool internals.
It can.
I think a "fair" paradigm for approaching `max_connections` might involve temporarily lowering the number of connections a single database pool can have, to allow more pools to be created/utilized. That's the kind of thing I think one might want (need?) access to the internals for
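To make the "fair" idea concrete, one possible policy (every name here is hypothetical, and `pg.Pool` does not expose a way to resize `max` at runtime today, hence the need for internals): split a host-wide connection budget evenly among however many pools currently exist, shrinking each pool's cap as new pools appear.

```javascript
// Sketch of a fairness policy: divide a host-wide connection budget
// evenly among the pools that currently exist for that host.
function fairShare(hostBudget, poolCount) {
  if (poolCount <= 0) return 0;
  // Floor of pool count into budget, with at least 1 connection per pool.
  // Caveat: with more pools than budget this floor-of-1 would oversubscribe
  // the host, so a real implementation would need queueing on top.
  return Math.max(1, Math.floor(hostBudget / poolCount));
}
```

So with a budget of 100 and 4 pools each pool could hold 25 connections, but opening a 5th pool would drop every pool's cap to 20.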