database-proxy
Make the pool of pools DRYer
While implementing the Oracle client, I found myself needing to set up yet another instance of a pool of pools.
This centralizes the pool of pools so we can add new clients using the same pool-of-pools Map, and gives us one unique way of closing all connections and clearing up the pools at once.
I'm looking at https://github.com/observablehq/database-proxy/blob/795cd62bac6fc5c7ee79b1b96fc21f06855d0e0d/lib/mysql.js#L10 and https://github.com/observablehq/database-proxy/blob/795cd62bac6fc5c7ee79b1b96fc21f06855d0e0d/lib/postgres.js#L9 and they seem to work differently? We should probably be careful that we don't overload the definition of what a pool is, or how we use them. mysql's createPool seems to automatically create and manage a pool of 10 connections (the default), while pg seems to default to 20.
Yeah, this is not exactly the same thing. Those are pools of connections to a given host. The data connector service manages multiple hosts, and each host has a dedicated pool of connections to that host.
pools.js is not itself a pool of connections: it keeps track of all the opened pools in a Map (host > pool).
And yes, each type of client has its own way of managing a pool of connections.
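To make the distinction concrete, here is a minimal sketch of the pool-of-pools idea described above. The names (getPool, closeAll, the createPool callback) are illustrative, not the actual database-proxy API:

```javascript
// A single Map keyed by host; each value is a client-specific connection
// pool. This is the "pool of pools", not a pool of connections itself.
const pools = new Map(); // host -> connection pool

function getPool(host, createPool) {
  // Reuse the existing pool for this host, or create one lazily.
  if (!pools.has(host)) pools.set(host, createPool(host));
  return pools.get(host);
}

async function closeAll() {
  // One unique way to close every pool for every host, then clear the Map.
  await Promise.all([...pools.values()].map((pool) => pool.end()));
  pools.clear();
}
```

Each client (mysql, postgres, oracle, …) would supply its own createPool, so the driver-specific pooling stays inside the connector while the Map stays shared.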
as well as having one unique way of closing all connections and clearing up the pools at once
What would be a use case to do this across all connectors, instead of having the connectors clean up after themselves?
In case the client application (i.e. data-connector) shuts down, we want to gracefully free all the connections, from all the pools to every host.
Then all you need to collect is a callback, right? Register a destroy() callback (or something like that), and if the connection closes unregister it?
But I'm wondering if control could be inverted here. Maybe pass in something like Observable's invalidation promise?
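A minimal sketch of that callback idea, with hypothetical names (onDestroy and destroyAll are not from the codebase):

```javascript
// Each connector registers a destroy callback; the function returned by
// onDestroy unregisters it, for when the connection closes on its own.
const destroyCallbacks = new Set();

function onDestroy(callback) {
  destroyCallbacks.add(callback);
  return () => destroyCallbacks.delete(callback); // unregister
}

async function destroyAll() {
  // On shutdown, run every still-registered callback, then forget them all.
  await Promise.all([...destroyCallbacks].map((cb) => cb()));
  destroyCallbacks.clear();
}
```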
... Actually, I think an AbortController would be a good choice here?
This is the TCP connection to the database server, not the HTTP connection to the client. But it is the same idea we are trying to implement here. What is Observable's invalidation promise?
Shouldn't matter, you can still create an abort signal and subscribe to it: https://blog.logrocket.com/complete-guide-abortcontroller-node-js/
It's a promise that resolves when a cell recomputes: https://github.com/observablehq/stdlib#invalidation
I understand the use case for an HTTP in-flight request needing to be aborted. But here we are looking to close "idling" socket connections to the database server.
Yes, my idea with the AbortController was that you pass in the signal and have the connector subscribe to the abort event:
signal.addEventListener("abort", () => { /* kill connection */ }, { once: true })
It's just a convenient convention that also allows you to unsubscribe (instead of having to remove a callback from a Set). But either works. :)
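For example (a sketch only; the connector shape and makeConnector are made up): the application owns a single AbortController, and each connector is handed just the signal:

```javascript
// One AbortController for the whole service; connectors never see the
// controller itself, only the signal, and subscribe to "abort" once.
const shutdown = new AbortController();

function makeConnector(name, signal) {
  const connector = { name, closed: false };
  signal.addEventListener(
    "abort",
    () => { connector.closed = true; }, // e.g. pool.end() in a real connector
    { once: true }
  );
  return connector;
}

const mysqlConnector = makeConnector("mysql", shutdown.signal);
const pgConnector = makeConnector("postgres", shutdown.signal);

shutdown.abort(); // on application shutdown, every connector tears itself down
```

Note this uses AbortSignal purely as an event-dispatch convention; no fetch or HTTP request is involved.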
Ok! That makes more sense. I got thrown off by the fetch mention in the provided example.
I think I need more education on this pattern. Let's pair on this at some point. Added here.