akka-persistence-jdbc
Provide a way to configure maxConnections when using a JNDI name
Slick 3.2 requires knowledge about the maximum number of connections available from a data source. When using an external data source, this value needs to be passed in explicitly (see https://github.com/slick/slick/pull/1685).
Currently, Akka Persistence JDBC passes None for this value, which is only meant to be used for connection pools with no limit.
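For concreteness, a minimal sketch of the Slick 3.2 factory method in question (the data source itself is left abstract; the cap of 20 is illustrative):

import javax.sql.DataSource
import slick.jdbc.JdbcBackend.Database

val ds: DataSource = ??? // any externally managed pool, e.g. obtained via JNDI

// What akka-persistence-jdbc does today: None tells Slick the pool is
// unbounded, which is wrong for a bounded external pool.
val current = Database.forDataSource(ds, maxConnections = None)

// What it should do for a pool capped at, say, 20 connections:
val fixed = Database.forDataSource(ds, maxConnections = Some(20))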
Good find.
I have actually been thinking of changing how akka-persistence-jdbc loads/creates the Slick database. In the current setup, akka-persistence-jdbc loads the Slick config (which initializes a database pool) once each for the jdbc-journal, the jdbc-snapshot-store and the jdbc-read-journal (as in the config). This allows using a different database config for the journal and the snapshot store; however, the disadvantage is that the Slick database is initialized three times (which creates three connection pools and thread pools).
I am not sure when I will be working on this, so feel free to create a PR which resolves this issue only
@WellingR We work around the duplicate Slick database initialization already in Lagom by using JNDI. So the data source is constructed separately from Slick & Akka Persistence JDBC and reused by all of the stores.
I think that's separate from this issue, since the problem of duplicate pools only happens when not using JNDI, whereas this problem only occurs when you are using JNDI.
I do plan to send a pull request for this issue... I have been discussing with @renatocaval as well. I'm thinking that a config property for maxConnections for each store is the best solution.
Something like:
jdbc-journal {
  slick {
    jndiMaxConnections = 20
  }
}
...and so on. Users will need to sync that with the actual size of the connection pool they configure elsewhere.
What do you think?
I agree that those are separate problems. The possibility to configure jndiMaxConnections seems to be a good solution; perhaps we could log a warning when this value is missing from the config.
Why would we create a new config property? We already have maxConnections.
Moreover, what I see in the Slick code base is that when using Hikari, maxConnections is used to configure both the connection pool and the AsyncExecutor.
First the Hikari pool size is set: https://github.com/slick/slick/blob/65a59491fdf238bb0af581fefd80447bfd2540f9/slick-hikaricp/src/main/scala/slick/jdbc/hikaricp/HikariCPJdbcDataSource.scala#L47-L48
Then the Hikari pool size is used to configure maxConnections: https://github.com/slick/slick/blob/65a59491fdf238bb0af581fefd80447bfd2540f9/slick-hikaricp/src/main/scala/slick/jdbc/hikaricp/HikariCPJdbcDataSource.scala#L18
Finally, JdbcDataSource.maxConnections is used to configure the AsyncExecutor: https://github.com/slick/slick/blob/65a59491fdf238bb0af581fefd80447bfd2540f9/slick/src/main/scala/slick/jdbc/JdbcBackend.scala#L290-L294
The bottom line is, the value used to define maxConnections on the connection pool must be propagated to the AsyncExecutor.
That happens when using Hikari, but not when using any other JdbcDataSource, and also not when configuring via a JNDI name.
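Roughly, the executor side of that propagation looks like this (paraphrased from the linked lines, not the literal source; the name and sizes are illustrative):

import slick.util.AsyncExecutor

// Whatever maxConnections the JdbcDataSource reports has to end up in the
// executor, otherwise its internal connection gate no longer matches the
// real pool size.
val executor = AsyncExecutor(
  name = "slick-executor",
  minThreads = 20,
  maxThreads = 20,
  queueSize = 1000,
  maxConnections = 20 // must equal the connection pool's maximum size
)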
I will file an issue on Slick plus a PR for this.
On our side, the workaround would be to use maxConnections to configure the DS using JNDI and propagate that same value down to the AsyncExecutor.
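A minimal sketch of that workaround, assuming the JNDI name and pool size would come from the plugin's config (the values here are made up):

import javax.naming.InitialContext
import javax.sql.DataSource
import slick.jdbc.JdbcBackend.Database
import slick.util.AsyncExecutor

val jndiName = "java:/jboss/datasources/myDS" // illustrative
val maxConnections = 20                       // must match the pool's real size

val ds = new InitialContext().lookup(jndiName).asInstanceOf[DataSource]

// Pass the real pool size instead of None, and size the executor's
// connection gate to match.
val db = Database.forDataSource(
  ds,
  maxConnections = Some(maxConnections),
  executor = AsyncExecutor("akka-persistence-jdbc", maxConnections,
    maxConnections, 1000, maxConnections)
)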
Filed an issue in the Slick project: https://github.com/slick/slick/issues/1814
and a corresponding PR: https://github.com/slick/slick/pull/1815
I realised that this can't be properly done without also implementing what @WellingR suggested about avoiding the creation of different pools.
The bottom line is:
When configured through plain configuration, there is no harm in having many pools, especially because each Slick Database will use its own pool in combination with a correctly configured AsyncExecutor.
When using JNDI, each piece (journal, snapshot and read-journal) will share the same DataSource, but will get a wrongly configured AsyncExecutor.
Connections will be shared across them, but each will have its own AsyncExecutor. Since the AsyncExecutor needs to be fine-tuned to the connection pool, I can only assume that we are messing it up all the way down.
On the other hand, we may still want to provide the possibility to have distinct pools and AsyncExecutors. Write requirements often differ from read requirements.
Wouldn't it be possible to divide up the number of connections across the AsyncExecutors? If the configured number of max connections is lower than the real size of the connection pool, then there should be no real problem, right? The AsyncExecutor would not use up all connections, but would leave some connections available for other executors, which is probably what we want in this case.
Yes, I agree @WellingR. This is why I suggested maxConnections needs to be configured per plugin. If you are sharing one DataSource between plugins with JNDI, you'll also need to allocate a maximum number of connections to each one so that the total doesn't exceed the maximum in the pool.
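At the Slick API level the idea would look something like this hypothetical split of a single 30-connection JNDI pool across the three plugins (the numbers are made up; the point is that the allocations sum to no more than the pool's maximum):

import javax.naming.InitialContext
import javax.sql.DataSource
import slick.jdbc.JdbcBackend.Database

val ds = new InitialContext()
  .lookup("java:/jboss/datasources/myDS").asInstanceOf[DataSource]

// 15 + 5 + 10 = 30, the pool's maximum.
val journalDb     = Database.forDataSource(ds, maxConnections = Some(15))
val snapshotDb    = Database.forDataSource(ds, maxConnections = Some(5))
val readJournalDb = Database.forDataSource(ds, maxConnections = Some(10))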
If you want to share an AsyncExecutor between the plugins as well, you'd use jndiDbName instead of jndiName to share a complete Slick Database instance including its AsyncExecutor.
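A hypothetical sketch of what sharing via jndiDbName amounts to (the JNDI name and sizes are made up):

import javax.naming.InitialContext
import javax.sql.DataSource
import slick.jdbc.JdbcBackend.Database
import slick.util.AsyncExecutor

// The application binds one fully configured Slick Database, pool and
// executor together, into JNDI:
val ds: DataSource = ??? // the shared connection pool, capped at 20
val sharedDb = Database.forDataSource(ds, Some(20),
  AsyncExecutor("shared", 20, 20, 1000, maxConnections = 20))
new InitialContext().bind("java:/slick/sharedDb", sharedDb)

// Each plugin configured with jndiDbName = "java:/slick/sharedDb" then looks
// up and reuses this same instance, AsyncExecutor included:
val pluginDb = new InitialContext()
  .lookup("java:/slick/sharedDb").asInstanceOf[Database]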
Hi. I get the warning "Having maxConnection > maxThreads can result in deadlocks if transactions or database locks are used." in my current setup:
<akka.version>2.6.9</akka.version>
<akka.management.version>1.0.8</akka.management.version>
<slick.version>3.3.2</slick.version>
I've configured a data source for akka-persistence-jdbc in my application.conf:
akka-persistence-jdbc {
  shared-databases {
    slick {
      profile = "slick.jdbc.OracleProfile$"
      jndiName = "java:/jboss/datasources/myDS"
    }
  }
}

jdbc-journal {
  use-shared-db = "slick"
}

jdbc-snapshot-store {
  use-shared-db = "slick"
}

jdbc-read-journal {
  use-shared-db = "slick"
}
As far as I can see there is no configuration option for my case in order to configure maxConnections etc. Correct?
I can't really follow the discussion here, but is there currently a solution to get rid of this message? And if not: is this actually a problem?
Thanks, Oliver
Your JNDI data source probably has a connection pool. You need to make sure that the numThreads setting has a value at least as high as the max number of connections in your connection pool.
But I think this warning will not go away, hence this issue. As far as I know, it is very unlikely that the deadlock issue occurs at all.
Thanks @WellingR. I configured the following and I still get the warning (the pool has 30 connections):
akka-persistence-jdbc {
  shared-databases {
    slick {
      profile = "slick.jdbc.OracleProfile$"
      jndiName = "java:/jboss/datasources/identityDS"
      numThreads = 40
    }
  }
}
So I will just hope it does not happen ;-) Thanks!
@olijaun, unfortunately, I don't think the numThreads config will be used.
When using jndiName, the Slick DB is configured on this line:
https://github.com/akka/akka-persistence-jdbc/blob/master/core/src/main/scala/akka/persistence/jdbc/db/SlickDatabase.scala#L54
The None is the max connections. As a result, it will configure an AsyncExecutor with 20 threads (the hardcoded default), which is fewer than the number of connections in your pool.
You will need to configure your connection pool to 20 connections.
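For context, a paraphrase of what happens on that path (not the literal source; the JNDI name is taken from the config above):

import javax.naming.InitialContext
import javax.sql.DataSource
import slick.jdbc.JdbcBackend.Database

// The numThreads setting is never read on this path.
val ds = new InitialContext()
  .lookup("java:/jboss/datasources/identityDS").asInstanceOf[DataSource]

// maxConnections = None, and the executor argument is left at its default,
// AsyncExecutor.default(), which uses 20 threads. A 30-connection pool can
// therefore outnumber the executor's threads, hence the warning.
val db = Database.forDataSource(ds, maxConnections = None)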